Updates from: 08/22/2024 01:05:04
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
To use the Content Safety APIs, you must create your Azure AI Content Safety resource in a supported region.
| France Central | ✅ | | | |✅ | | ✅|
| West Europe | ✅ | ✅ |✅ | |✅ | |✅ |
| Japan East | ✅ | | | |✅ | |✅ |
-| Australia East| ✅ | ✅ | | |✅ | ✅| ✅|
+| Australia East| ✅ | | | |✅ | ✅| ✅|
| USGov Arizona | ✅ | | | | | | |
| USGov Virginia | ✅ | | | | | | |
ai-services Quickstart Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-groundedness.md
Create a new Python file named _quickstart.py_. Open the new file in your preferred editor or IDE.
```Python
import http.client
import json
-
- conn = http.client.HTTPSConnection("<endpoint>/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview")
+
+ endpoint = "<your_custom_subdomain>.cognitiveservices.azure.com"
+ conn = http.client.HTTPSConnection(endpoint)
payload = json.dumps({
  "domain": "Generic",
  "task": "QnA",
"groundingSources": [ "I'm 21 years old and I need to make a decision about the next two years of my life. Within a week. I currently work for a bank that requires strict sales goals to meet. IF they aren't met three times (three months) you're canned. They pay me 10/hour and it's not unheard of to get a raise in 6ish months. The issue is, **I'm not a salesperson**. That's not my personality. I'm amazing at customer service, I have the most positive customer service \"reports\" done about me in the short time I've worked here. A coworker asked \"do you ask for people to fill these out? you have a ton\". That being said, I have a job opportunity at Chase Bank as a part time teller. What makes this decision so hard is that at my current job, I get 40 hours and Chase could only offer me 20 hours/week. Drive time to my current job is also 21 miles **one way** while Chase is literally 1.8 miles from my house, allowing me to go home for lunch. I do have an apartment and an awesome roommate that I know wont be late on his portion of rent, so paying bills with 20hours a week isn't the issue. It's the spending money and being broke all the time.\n\nI previously worked at Wal-Mart and took home just about 400 dollars every other week. So I know i can survive on this income. I just don't know whether I should go for Chase as I could definitely see myself having a career there. I'm a math major likely going to become an actuary, so Chase could provide excellent opportunities for me **eventually**." ],
- "reasoning": false
+ "reasoning": False
})
headers = {
  'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
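For reference, the updated snippet assembled end to end might look like the following sketch. The `qna` query, the `text` answer being checked, and the shortened grounding source are illustrative stand-ins for the quickstart's full values.

```python
import http.client
import json

endpoint = "<your_custom_subdomain>.cognitiveservices.azure.com"
conn = http.client.HTTPSConnection(endpoint)

payload = json.dumps({
  "domain": "Generic",
  "task": "QnA",
  "qna": {"query": "How much does she currently get paid per hour at the bank?"},
  "text": "12/hour",  # the generated answer to check against the sources
  "groundingSources": ["I currently work for a bank ... They pay me 10/hour ..."],  # shortened here
  "reasoning": False  # Python boolean; json.dumps serializes this to JSON false
})
headers = {
  'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
  'Content-Type': 'application/json'
}

conn.request("POST", "/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview", payload, headers)
response = conn.getresponse()
print(json.loads(response.read()))
```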
ai-services Concept Custom Generative https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-generative.md
Field extraction custom generative model `2024-07-31-preview` version is only av
* **Field Description**. Provide more contextual information in description to help clarify the field that needs to be extracted. Examples include location in the document, potential field labels it can be associated with, and ways to differentiate with other terms that could be ambiguous.
-* **Variation**. Custom generative models can generalize across different document templates of the same document type. As a best practice, create a single model for all variations of a document type. Ideally, include a visual template for each type, especially for ones that
+* **Variation**. Custom generative models can generalize across different document templates of the same document type. As a best practice, create a single model for all variations of a document type. Ideally, include a visual template for each type, especially for ones that involve distinct formatting or structural elements, to improve the model's accuracy and consistency in generating or processing documents.
## Service guidance
ai-services Concept Mortgage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-mortgage-documents.md
The Document Intelligence Mortgage models use powerful Optical Character Recognition (OCR) capabilities to analyze and extract key fields and line items from a range of mortgage documents.
**Supported document types:**

* Uniform Residential Loan Application (Form 1003)
+* Uniform Residential Appraisal Report (Form 1004)
+* Verification of employment form (Form 1005)
* Uniform Underwriting and Transmittal Summary (Form 1008)
* Closing Disclosure form
Document Intelligence v4.0 (2024-07-31-preview) supports the following tools, applications, and libraries:
| Feature | Resources | Model ID |
|-|-|--|
-|**Mortgage model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/operation-groups?view=rest-aiservices-v4.0%20(2024-07-31-preview)&preserve-view=true)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**&bullet; prebuilt-mortgage.us.1003</br>&bullet; prebuilt-mortgage.us.1008</br>&bullet; prebuilt-mortgage.us.closingDisclosure**|
+|**Mortgage model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/operation-groups?view=rest-aiservices-v4.0%20(2024-07-31-preview)&preserve-view=true)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**&bullet; prebuilt-mortgage.us.1003</br>&bullet; prebuilt-mortgage.us.1004</br>&bullet; prebuilt-mortgage.us.1005</br>&bullet; prebuilt-mortgage.us.1008</br>&bullet; prebuilt-mortgage.us.closingDisclosure**|
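To illustrate the newly listed model IDs, a minimal REST sketch against the v4.0 preview analyze endpoint might look like this; the resource name, key, and document URL are placeholders, and the polling loop follows the standard Document Intelligence long-running operation pattern.

```python
# Hedged sketch: analyze a Form 1004 appraisal with the 2024-07-31-preview REST API.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your_key>"
analyze_url = (
    f"{endpoint}/documentintelligence/documentModels/"
    "prebuilt-mortgage.us.1004:analyze?api-version=2024-07-31-preview"
)

# Submit the document by URL; the service replies with an Operation-Location header to poll.
resp = requests.post(
    analyze_url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"urlSource": "https://<your-storage>/form-1004.pdf"},
)
resp.raise_for_status()
operation_url = resp.headers["Operation-Location"]

# Poll until the long-running analyze operation finishes.
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] not in ("notStarted", "running"):
        break
    time.sleep(2)
print(result["status"])
```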
::: moniker-end

## Input requirements
The following are the fields extracted from a 1003 URLA form in the JSON output response.
|`Property.MixedUseProperty`|`selectionGroup`|Is the property a mixed-use property?|:selected: NO:unselected: YES|
|`Property.ManufacturedHome`|`selectionGroup`|Is the property a manufactured home?|:selected: NO:unselected: YES|
-## [2024-02-29-preview](#tab/2024-02-29-preview)
--
-### Supported languages and locales
---
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`)|
--
-### Supported document fields
-
-#### mortgage.us.1003
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`LenderLoanNumber`|`string`|Lender loan number or universal loan identifier|10Bx939c5543TqA1144M999143X38|
-|`AgencyCaseNumber`|`string`|Agency case number|115894|
-|`Borrower`|`object`|||
-|`Borrower.Name`|`string`|Borrower's full name as written on the form|Gwen Stacy|
-|`Borrower.SocialSecurityNumber`|`string`|Borrower's social security number|557-99-7283|
-|`Borrower.BirthDate`|`date`|Borrower's date of birth|11/07/1989|
-|`Borrower.CitizenshipType`|`selectionGroup`|Borrower's citizenship|:selected: U.S. Citizen<br>:unselected: Permanent Resident Alien<br>:unselected: Non-Permanent Resident Alien|
-|`Borrower.CreditApplicationType`|`selectionGroup`|Borrower's credit type|:selected: I'm applying for individual credit.<br>:unselected: I'm applying for joint credit.|
-|`Borrower.NumberOfBorrowers`|`integer`|Total number of borrowers|1|
-|`Borrower.MaritalStatus`|`selectionGroup`|Borrower's marital status|:selected: Married<br>:unselected: Separated<br>:unselected: Unmarried|
-|`Borrower.NumberOfDependents`|`integer`|Total number of borrower's dependents|2|
-|`Borrower.DependentsAges`|`string`|Age of borrower's dependents|10, 11|
-|`Borrower.HomePhoneNumber`|`phoneNumber`|Borrower's home phone number|(818) 246-8900|
-|`Borrower.CellPhoneNumber`|`phoneNumber`|Borrower's cell phone number|(831) 728-4766|
-|`Borrower.WorkPhoneNumber`|`phoneNumber`|Borrower's work phone number|(987) 213-5674|
-|`Borrower.CurrentAddress`|`address`|Borrower's current address|1634 W Glenoaks Blvd<br>Glendale CA 91201 United States|
-|`Borrower.YearsInCurrentAddress`|`integer`|Years in current address|1|
-|`Borrower.MonthsInCurrentAddress`|`integer`|Months in current address|1|
-|`Borrower.CurrentHousingExpenseType`|`selectionGroup`|Borrower's housing expense type|:unselected: No primary housing expense:selected: Own:unselected: Rent|
-|`Borrower.CurrentMonthlyRent`|`number`|Borrower's monthly rent|1,600.00|
-|`Borrower.SignedDate`|`date`|Borrower's signature date|03/16/2021|
-|`CoBorrower`|`object`|||
-|`CoBorrower.Names`|`string`|Coborrowers' names|Peter Parker<br>Mary Jane Watson|
-|`CoBorrower.SignedDate`|`date`|Coborrower's signature date|03/16/2021|
-|`CurrentEmployment`|`object`|||
-|`CurrentEmployment.DoesNotApply`|`boolean`|Checkbox state of 'Doesn't apply'|:selected:|
-|`CurrentEmployment.EmployerName`|`string`|Borrower's employer or business name|Spider Wb Corp.|
-|`CurrentEmployment.EmployerPhoneNumber`|`phoneNumber`|Borrower's employer phone number|(390) 353-2474|
-|`CurrentEmployment.EmployerAddress`|`address`|Borrower's employer address|3533 Band Ave<br>Glendale CA 92506 United States|
-|`CurrentEmployment.PositionOrTitle`|`string`|Borrower's position or title|Language Teacher|
-|`CurrentEmployment.StartDate`|`date`|Borrower's employment start date|01/08/2020|
-|`CurrentEmployment.GrossMonthlyIncomeTotal`|`number`|Borrower's gross monthly income total|4,254.00|
-|`Loan`|`object`|||
-|`Loan.Amount`|`number`|Loan amount|156,000.00|
-|`Loan.PurposeType`|`selectionGroup`|Loan purpose type|:unselected: Purchase:selected: Refinance:unselected: Other|
-|`Loan.OtherPurpose`|`string`|Other loan purpose type|Construction|
-|`Loan.RefinanceType`|`selectionGroup`|Loan refinance type|:selected: No Cash Out<br>:unselected: Limited Cash Out<br>:unselected: Cash Out|
-|`Loan.RefinanceProgramType`|`selectionGroup`|Loan refinance program type|:unselected: Full Documentation:selected: Interest Rate Reduction<br>:unselected: Streamlined without Appraisal<br>:unselected: Other|
-|`Loan.OtherRefinanceProgram`|`string`|Other loan refinance program type|Cash-out refinance|
-|`Property`|`object`|||
-|`Property.Address`|`address`|Property address|1634 W Glenoaks Blvd<br>Glendale CA 91201 Los Angeles|
-|`Property.NumberOfUnits`|`integer`|Number of units|1|
-|`Property.Value`|`number`|Property value|200,000.00|
-|`Property.OccupancyStatus`|`selectionGroup`|Property occupancy status|:selected: Primary Residence<br>:unselected: Second Home<br>:unselected: Investment Property|
-|`Property.IsFhaSecondaryResidence`|`boolean`|Checkbox state of '`FHA` Secondary Residence'|:unselected:|
-|`Property.MixedUseProperty`|`selectionGroup`|Is the property a mixed-use property?|:selected: NO:unselected: YES|
-|`Property.ManufacturedHome`|`selectionGroup`|Is the property a manufactured home?|:selected: NO:unselected: YES|
--
-The 1003 URLA key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
## Field extraction 1004 Uniform Residential Appraisal Report (URAR)

The following are the fields extracted from a 1004 URAR form in the JSON output response.
The mortgage closing disclosure key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
+
ai-services Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md
Title: Azure OpenAI Service Assistants API concepts
description: Learn about the concepts behind the Azure OpenAI Assistants API. Previously updated : 03/04/2024 Last updated : 08/21/2024 --++ recommendations: false
The Assistants API has support for several parameters that let you customize the
## Context window management
-Assistants automatically truncates text to ensure it stays within the model's maximum context length. You can customize this behavior by specifying the maximum tokens you'd like a run to utilize and/or the maximum number of recent messages you'd like to include in a run.
+Assistants automatically truncate text to ensure it stays within the model's maximum context length. You can customize this behavior by specifying the maximum tokens you'd like a run to utilize and/or the maximum number of recent messages you'd like to include in a run.
### Max completion and max prompt tokens

To control the token usage in a single Run, set `max_prompt_tokens` and `max_completion_tokens` when you create the Run. These limits apply to the total number of tokens used in all completions throughout the Run's lifecycle.
-For example, initiating a Run with `max_prompt_tokens` set to 500 and `max_completion_tokens` set to 1000 means the first completion will truncate the thread to 500 tokens and cap the output at 1000 tokens. If only 200 prompt tokens and 300 completion tokens are used in the first completion, the second completion will have available limits of 300 prompt tokens and 700 completion tokens.
+For example, initiating a Run with `max_prompt_tokens` set to 500 and `max_completion_tokens` set to 1000 means the first completion will truncate the thread to 500 tokens and cap the output at 1,000 tokens. If only 200 prompt tokens and 300 completion tokens are used in the first completion, the second completion will have available limits of 300 prompt tokens and 700 completion tokens.
If a completion reaches the `max_completion_tokens` limit, the Run will terminate with a status of incomplete, and details will be provided in the `incomplete_details` field of the Run object.
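A minimal sketch of setting these limits when creating a Run with the `openai` Python package follows; the environment variable names, API version, and the thread and assistant IDs are placeholders.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

# The thread and assistant are assumed to have been created earlier.
run = client.beta.threads.runs.create(
    thread_id="<thread_id>",
    assistant_id="<assistant_id>",
    max_prompt_tokens=500,       # the thread is truncated to fit this prompt budget
    max_completion_tokens=1000,  # total completion tokens allowed across the Run
)
print(run.status)
```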
When using the File Search tool, we recommend setting the `max_prompt_tokens` to
## Truncation strategy
-You may also specify a truncation strategy to control how your thread should be rendered into the model's context window. Using a truncation strategy of type `auto` will use OpenAI's default truncation strategy. Using a truncation strategy of type `last_messages` will allow you to specify the number of the most recent messages to include in the context window.
+You can also specify a truncation strategy to control how your thread should be rendered into the model's context window. Using a truncation strategy of type `auto` will use OpenAI's default truncation strategy. Using a truncation strategy of type `last_messages` will allow you to specify the number of the most recent messages to include in the context window.
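As a sketch (same client and placeholder IDs as in the previous example), requesting a `last_messages` truncation strategy looks like this:

```python
# Keep only the five most recent messages in the model's context window.
run = client.beta.threads.runs.create(
    thread_id="<thread_id>",
    assistant_id="<assistant_id>",
    truncation_strategy={"type": "last_messages", "last_messages": 5},
)
```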
## See also

* Learn more about Assistants and [File Search](../how-to/file-search.md)
ai-services Content Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-credentials.md
# Content Credentials
-With the improved quality of content from generative AI models, there is an increased need for more transparency on the origin of AI-generated content. All AI-generated images from the Azure OpenAI service now include Content Credentials, a tamper-evident way to disclose the origin and history of content. Content Credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project.
+With the improved quality of content from generative AI models, there is an increased need for more transparency on the origin of AI-generated content. All AI-generated images from Azure OpenAI Service now include Content Credentials, a tamper-evident way to disclose the origin and history of content. Content Credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project.
## What are Content Credentials?
The manifest contains several key pieces of information:
| Field name | Field content |
| --- | --- |
| `"description"` | This field has a value of `"AI Generated Image"` for all DALL-E model generated images, attesting to the AI-generated nature of the image. |
-| `"softwareAgent"` | This field has a value of `"Azure OpenAI DALL-E"` for all images generated by DALL-E series models in the Azure OpenAI service. |
+| `"softwareAgent"` | This field has a value of `"Azure OpenAI DALL-E"` for all images generated by DALL-E series models in Azure OpenAI Service. |
|`"when"` |The timestamp of when the Content Credentials were created. |
-Content Credentials in the Azure OpenAI Service can help people understand when visual content is AI-generated. For more information on how to responsibly build solutions with Azure OpenAI service image-generation models, visit the [Azure OpenAI transparency note](/legal/cognitive-services/openai/transparency-note?tabs=text).
+Content Credentials in the Azure OpenAI Service can help people understand when visual content is AI-generated. For more information on how to responsibly build solutions with Azure OpenAI Service image-generation models, visit the [Azure OpenAI transparency note](/legal/cognitive-services/openai/transparency-note?tabs=text).
## How do I leverage Content Credentials in my solution today?
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
description: Learn about the model deprecations and retirements in Azure OpenAI. Previously updated : 08/14/2024 Last updated : 08/21/2024
These models are currently available for use in Azure OpenAI Service.
**<sup>1</sup>** We will notify all customers with these preview deployments at least 30 days before the start of the upgrades. We will publish an upgrade schedule detailing the order of regions and model versions that we will follow during the upgrades, and link to that schedule from here.
+> [!IMPORTANT]
+> Vision enhancements preview features, including Optical Character Recognition (OCR), object grounding, and video prompts, will be retired and no longer available once `gpt-4` Version: `vision-preview` is upgraded to `turbo-2024-04-09`. If you're currently relying on any of these preview features, this automatic model upgrade will be a breaking change.
## Deprecated models
If you're an existing customer looking for information about these models, see [
* Updated `gpt-4` preview model upgrade date to November 15, 2024 or later for the following versions:
    * 1106-preview
    * 0125-preview
- * vision-preview
+ * vision-preview (Vision enhancements feature will no longer be supported once this model is retired/upgraded.)
### July 18, 2024
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
You can deploy to a standalone Teams app directly from Azure OpenAI Studio. Foll
> [!NOTE]
> The citation experience is available in **Debug (Edge)** or **Debug (Chrome)** only.
-1. After you've tested your copilot, you can provision, deploy, and publish your Teams app by selecting the **Teams Toolkit Extension** on the left pane in Visual Studio Code. Run the separate provision, deploy, and publish stages in the **Lifecycle** section. You may be asked to sign in to your Microsoft 365 account where you have permissions to upload custom apps and your Azure Account.
+1. After you've tested your copilot, you can provision, deploy, and publish your Teams app by selecting the **Teams Toolkit Extension** on the left pane in Visual Studio Code. Run the separate provision, deploy, and publish stages in the **Lifecycle** section. You might be asked to sign in to your Microsoft 365 account where you have permissions to upload custom apps and your Azure Account.
1. Provision your app: (detailed instructions in [Provision cloud resources](/microsoftteams/platform/toolkit/provision))
Once you select add your dataset, you can use the **System message** section in
**Define a role**
-You can define a role that you want your assistant. For example, if you are building a support bot, you can add *"You are an expert incident support assistant that helps users solve new issues."*.
+You can define the role that you want your assistant to have. For example, if you are building a support bot, you can add *"You are an expert incident support assistant that helps users solve new issues."*
**Define the type of data being retrieved**

You can also describe the nature of the data you're providing to the assistant.
-* Define the topic or scope of your dataset, like "financial report", "academic paper", or "incident report". For example, for technical support you might add *"You answer queries using information from similar incidents in the retrieved documents."*.
-* If your data has certain characteristics, you can add these details to the system message. For example, if your documents are in Japanese, you can add *"You retrieve Japanese documents and you should read them carefully in Japanese and answer in Japanese."*.
-* If your documents include structured data like tables from a financial report, you can also add this fact into the system prompt. For example, if your data has tables, you might add *"You are given data in form of tables pertaining to financial results and you should read the table line by line to perform calculations to answer user questions."*.
+* Define the topic or scope of your dataset, like "financial report," "academic paper," or "incident report." For example, for technical support you might add *"You answer queries using information from similar incidents in the retrieved documents."*
+* If your data has certain characteristics, you can add these details to the system message. For example, if your documents are in Japanese, you can add *"You retrieve Japanese documents and you should read them carefully in Japanese and answer in Japanese."*
+* If your documents include structured data like tables from a financial report, you can also add this fact into the system prompt. For example, if your data has tables, you might add *"You are given data in form of tables pertaining to financial results and you should read the table line by line to perform calculations to answer user questions."*
**Define the output style**
-You can also change the model's output by defining a system message. For example, if you want to ensure that the assistant answers are in French, you can add a prompt like *"You are an AI assistant that helps users who understand French find information. The user questions can be in English or French. Please read the retrieved documents carefully and answer them in French. Please translate the knowledge from documents to French to ensure all answers are in French."*.
+You can also change the model's output by defining a system message. For example, if you want to ensure that the assistant answers are in French, you can add a prompt like *"You are an AI assistant that helps users who understand French find information. The user questions can be in English or French. Please read the retrieved documents carefully and answer them in French. Please translate the knowledge from documents to French to ensure all answers are in French."*
**Reaffirm critical behavior**
-Azure OpenAI On Your Data works by sending instructions to a large language model in the form of prompts to answer user queries using your data. If there is a certain behavior that is critical to the application, you can repeat the behavior in system message to increase its accuracy. For example, to guide the model to only answer from documents, you can add "*Please answer using retrieved documents only, and without using your knowledge. Please generate citations to retrieved documents for every claim in your answer. If the user question cannot be answered using retrieved documents, please explain the reasoning behind why documents are relevant to user queries. In any case, don't answer using your own knowledge."*.
+Azure OpenAI On Your Data works by sending instructions to a large language model in the form of prompts to answer user queries using your data. If there is a certain behavior that is critical to the application, you can repeat the behavior in system message to increase its accuracy. For example, to guide the model to only answer from documents, you can add "*Please answer using retrieved documents only, and without using your knowledge. Please generate citations to retrieved documents for every claim in your answer. If the user question cannot be answered using retrieved documents, please explain the reasoning behind why documents are relevant to user queries. In any case, don't answer using your own knowledge."*
**Prompt Engineering tricks**
-There are many tricks in prompt engineering that you can try to improve the output. One example is chain-of-thought prompting where you can add *"Let's think step by step about information in retrieved documents to answer user queries. Extract relevant knowledge to user queries from documents step by step and form an answer bottom up from the extracted information from relevant documents."*.
+There are many tricks in prompt engineering that you can try to improve the output. One example is chain-of-thought prompting where you can add *"Let's think step by step about information in retrieved documents to answer user queries. Extract relevant knowledge to user queries from documents step by step and form an answer bottom up from the extracted information from relevant documents."*
> [!NOTE]
> The system message is used to modify how the GPT assistant responds to a user question based on retrieved documentation. It doesn't affect the retrieval process. If you'd like to provide instructions for the retrieval process, it's better to include them in the questions.
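Putting these pieces together, a composed system message might look like the following sketch; the strings are the article's own examples, and the user question is a hypothetical placeholder.

```python
# Illustrative only: role, data description, and reaffirmed behavior in one system message.
system_message = (
    "You are an expert incident support assistant that helps users solve new issues. "
    "You answer queries using information from similar incidents in the retrieved documents. "
    "Please answer using retrieved documents only, and without using your knowledge. "
    "Please generate citations to retrieved documents for every claim in your answer."
)
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "The VPN client fails with error 809. What should I try?"},  # hypothetical
]
```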
This means the storage account isn't accessible with the given credentials. In t
### 503 errors when sending queries with Azure AI Search
-Each user message can translate to multiple search queries, all of which get sent to the search resource in parallel. This can produce throttling behavior when the number of search replicas and partitions is low. The maximum number of queries per second that a single partition and single replica can support may not be sufficient. In this case, consider increasing your replicas and partitions, or adding sleep/retry logic in your application. See the [Azure AI Search documentation](../../../search/performance-benchmarks.md) for more information.
+Each user message can translate to multiple search queries, all of which get sent to the search resource in parallel. This can produce throttling behavior when the number of search replicas and partitions is low. The maximum number of queries per second that a single partition and single replica can support might not be sufficient. In this case, consider increasing your replicas and partitions, or adding sleep/retry logic in your application. See the [Azure AI Search documentation](../../../search/performance-benchmarks.md) for more information.
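One way to add the sleep/retry logic mentioned above is a small backoff wrapper; `call_with_retries` and `send_request` are hypothetical names, with `send_request` standing in for whatever callable issues your query.

```python
import random
import time

def call_with_retries(send_request, max_retries=5):
    """Retry a throttled (503) request with exponential backoff and jitter."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 503:
            return response
        # Back off exponentially, with jitter, before retrying the throttled call.
        time.sleep((2 ** attempt) + random.random())
    return response
```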
## Regional availability and model support
ai-services Monitor Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/monitor-openai.md
Azure OpenAI provides out-of-box dashboards for each of your Azure OpenAI resour
:::image type="content" source="../media/monitoring/dashboard.png" alt-text="Screenshot that shows out-of-box dashboards for an Azure OpenAI resource in the Azure portal." lightbox="../media/monitoring/dashboard.png" border="false":::
-The dashboards are grouped into four categories: **HTTP Requests**, **Tokens-Based Usage**, **PTU Utilization**, and **Fine-tuning**
+The dashboards are grouped into four categories: **HTTP Requests**, **Tokens-Based Usage**, **PTU Utilization**, and **Fine-tuning**.
## Data collection and routing in Azure Monitor
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 08/16/2024 Last updated : 08/21/2024
Global Standard deployments use Azure's global infrastructure, dynamically routi
The Usage Limit determines the level of usage above which customers might see larger variability in response latency. A customer's usage is defined per model and is the total tokens consumed across all deployments in all subscriptions in all regions for a given tenant.

> [!NOTE]
-> Usage tiers only apply to standard and global standard deployment types. Usage tiers do not apply to global batch deployments.
+> Usage tiers only apply to standard and global standard deployment types. Usage tiers do not apply to global batch and provisioned throughput deployments.
#### GPT-4o global standard & standard
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
Previously updated : 03/04/2024 Last updated : 08/21/2024 recommendations: false zone_pivot_groups: openai-use-your-data
zone_pivot_groups: openai-use-your-data
[Reference](https://platform.openai.com/docs/api-reference?lang=python) | [Source code](https://github.com/openai/openai-python) | [Package (pypi)](https://pypi.org/project/openai/) | [Samples](https://github.com/openai/openai-cookbook/)
-The links above reference the OpenAI API for Python. There is no Azure-specific OpenAI Python SDK. [Learn how to switch between the OpenAI services and Azure OpenAI services](/azure/ai-services/openai/how-to/switching-endpoints).
+These links reference the OpenAI API for Python. There's no Azure-specific OpenAI Python SDK. [Learn how to switch between the OpenAI services and Azure OpenAI services](/azure/ai-services/openai/how-to/switching-endpoints).
::: zone-end
The links above reference the OpenAI API for Python. There is no Azure-specific
::: zone-end
-In this quickstart you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.
+In this quickstart, you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.
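For orientation, the kind of call this quickstart builds toward looks roughly like the sketch below; the deployment name, environment variables, and the Azure AI Search `data_sources` parameters are assumptions, not fixed values.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

completion = client.chat.completions.create(
    model="gpt-4o",  # your deployment name
    messages=[{"role": "user", "content": "What are my available health plans?"}],
    # Ground the completion on an Azure AI Search index via the On Your Data extension.
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                "index_name": os.environ["AZURE_AI_SEARCH_INDEX"],
                "authentication": {
                    "type": "api_key",
                    "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
                },
            },
        }]
    },
)
print(completion.choices[0].message.content)
```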
## Prerequisites
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
::: zone pivot="programming-language-javascript"

-- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
+- [Long Term Support (LTS) versions of Node.js](https://github.com/nodejs/release#release-schedule)
::: zone-end
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
Title: 'Speech to text with Azure OpenAI Service'
+ Title: Convert speech to text with Azure OpenAI Service
-description: Use the Azure OpenAI Whisper model for speech to text.
+description: Learn how to use the Azure OpenAI Whisper model for speech to text conversion.
Previously updated : 3/19/2024 Last updated : 8/09/2024
zone_pivot_groups: openai-whisper
# Quickstart: Speech to text with the Azure OpenAI Whisper model
-In this quickstart, you use the Azure OpenAI Whisper model for speech to text.
+This quickstart explains how to use the [Azure OpenAI Whisper model](../speech-service/whisper-overview.md) for speech to text conversion. The Whisper model can transcribe human speech in numerous languages, and it can also translate other languages into English.
-The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.
+The file size limit for the Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.
+
+> [!NOTE]
+> The OpenAI Whisper model is currently in Limited Access Public Preview.
## Prerequisites

-- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- An Azure OpenAI resource with a `whisper` model deployed in a supported region. [Whisper model regional availability](./concepts/models.md#whisper-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md).
+- An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
+- An Azure OpenAI resource with a Whisper model deployed in a [supported region](./concepts/models.md#whisper-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md).
## Set up

### Retrieve key and endpoint
-To successfully make a call against Azure OpenAI, you'll need an **endpoint** and a **key**.
+To successfully make a call against Azure OpenAI, you need an *endpoint* and a *key*.
|Variable name | Value |
|--|-|
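Once the endpoint and key are set, a minimal transcription call with the `openai` Python package might look like this sketch; the deployment name, audio file name, and environment variables are assumptions.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Transcribe a local audio file with your Whisper deployment (file must be under 25 MB).
with open("speech.wav", "rb") as audio_file:
    result = client.audio.transcriptions.create(
        model="whisper",  # the name of your Whisper deployment
        file=audio_file,
    )
print(result.text)
```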
If you want to clean up and remove an Azure OpenAI resource, you can delete the
## Next steps
-* Learn more about how to work with Whisper models with the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md) API.
-* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples)
+* To learn how to convert audio data to text in batches, see [Create a batch transcription](../speech-service/batch-transcription-create.md).
+* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples).
ai-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/what-is-personalizer.md
Title: What is Personalizer? description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.--++ ms.
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
For groundedness, we provide two versions:
| When to use it? | Use the groundedness metric when you need to verify that AI-generated responses align with and are validated by the provided context. It's essential for applications where factual correctness and contextual accuracy are key, like information retrieval, question-answering, and content summarization. This metric ensures that the AI-generated answers are well-supported by the context. |
| What does it need as input? | Question, Context, Generated Answer |
-Built-in prompt used by Large Language Model judge to score this metric:
+Built-in prompt used by the Large Language Model judge to score this metric:
```
You will be presented with a CONTEXT and an ANSWER about that CONTEXT. You need to decide whether the ANSWER is entailed by the CONTEXT by choosing one of the following rating:
Note the ANSWER is generated by a computer system, it can contain certain symbol
| What does it need as input? | Question, Context, Generated Answer |
-Built-in prompt used by Large Language Model judge to score this metric (For question answering data format):
+Built-in prompt used by the Large Language Model judge to score this metric (For question answering data format):
```
Relevance measures how well the answer addresses the main aspects of the question, based on the context. Consider whether all and only the important aspects are contained in the answer when evaluating relevance. Given the context and question, score the relevance of the answer between one to five stars using the following rating scale:
Five stars: the answer has perfect relevance
This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
```
-Built-in prompt used by Large Language Model judge to score this metric (For conversation data format) (without Ground Truth available):
+Built-in prompt used by the Large Language Model judge to score this metric (For conversation data format) (without Ground Truth available):
```
You will be provided a question, a conversation history, fetched documents related to the question and a response to the question in the {DOMAIN} domain. Your task is to evaluate the quality of the provided response by following the steps below:
You will be provided a question, a conversation history, fetched documents relat
- Your final response must include both the reference answer and the evaluation result. The evaluation result should be written in English.
```
-Built-in prompt used by Large Language Model judge to score this metric (For conversation data format) (with Ground Truth available):
+Built-in prompt used by the Large Language Model judge to score this metric (For conversation data format) (with Ground Truth available):
```
Labeling standards are as following:
| When to use it? | Use it when assessing the readability and user-friendliness of your model's generated responses in real-world applications. |
| What does it need as input? | Question, Generated Answer |
-Built-in prompt used by Large Language Model judge to score this metric:
+Built-in prompt used by the Large Language Model judge to score this metric:
```
Coherence of an answer is measured by how well all the sentences fit together and sound naturally as a whole. Consider the overall quality of the answer when evaluating coherence. Given the question and answer, score the coherence of answer between one to five stars using the following rating scale:
This rating value should always be an integer between 1 and 5. So the rating pro
| When to use it? | Use it when evaluating the linguistic correctness of the AI-generated text, ensuring that it adheres to proper grammatical rules, syntactic structures, and vocabulary usage in the generated responses. |
| What does it need as input? | Question, Generated Answer |
-Built-in prompt used by Large Language Model judge to score this metric:
+Built-in prompt used by the Large Language Model judge to score this metric:
```
Fluency measures the quality of individual sentences in the answer, and whether they are well-written and grammatically correct. Consider the quality of individual sentences when evaluating fluency. Given the question and answer, score the fluency of the answer between one to five stars using the following rating scale:
This rating value should always be an integer between 1 and 5. So the rating pro
| When to use it? | Use the retrieval score when you want to guarantee that the documents retrieved are highly relevant for answering your users' questions. This score helps ensure the quality and appropriateness of the retrieved content. |
| What does it need as input? | Question, Context, Generated Answer |
-Built-in prompt used by Large Language Model judge to score this metric:
+Built-in prompt used by the Large Language Model judge to score this metric:
```
A chat history between user and bot is shown below
Think through step by step:
-Built-in prompt used by Large Language Model judge to score this metric:
+Built-in prompt used by the Large Language Model judge to score this metric:
```
GPT-Similarity, as a metric, measures the similarity between the predicted answer and the correct answer. If the information and content in the predicted answer is similar or equivalent to the correct answer, then the value of the Equivalence metric should be high, else it should be low. Given the question, correct answer, and predicted answer, determine the value of Equivalence metric using the following rating scale:
ai-studio Copilot Sdk Build Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-build-rag.md
In this part one, you learn how to:
> [!IMPORTANT]
> This tutorial builds on the code and environment you set up in the quickstart.

-- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/tree/main/tutorial/data) to your local machine.
## Application code structure
You need to set environment variables for the Azure AI Search service and connec
If you don't have an Azure AI Search index already created, we walk through how to create one. If you already have an index to use, you can skip to the [set the search environment variables](#set-search-environment-variables) section. The search index is created on the Azure AI Search service that was either created or referenced in the previous step.
-1. Use your own data or [download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine. Unzip the file into your **rag-tutorial** folder. This data is a collection of markdown files that represent product information. The data is structured in a way that is easy to ingest into a search index. You build a search index from this data.
+1. Use your own data or [download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/tree/main/tutorial/data) to your local machine. Unzip the file into your **rag-tutorial** folder. This data is a collection of markdown files that represent product information. The data is structured in a way that is easy to ingest into a search index. You build a search index from this data.
1. The prompt flow RAG package allows you to ingest the markdown files, locally create a search index, and register it in the cloud project. Install the prompt flow RAG package:
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
The following table lists all the upcoming breaking changes and feature retireme
| [Self-hosted gateway v0/v1 retirement][shgwv0v1] | October 1, 2023 |
| [API version retirements][api2023] | June 1, 2024 |
| [Workspaces preview breaking changes][workspaces2024] | June 14, 2024 |
-| [stv1 platform retirement][stv12024] | August 31, 2024 |
-| [Workspaces preview breaking changes, part 2][workspaces2025march] | March 31, 2025 |
+| [stv1 platform retirement - Global Azure][stv12024] | August 31, 2024 |
+| [stv1 platform retirement - Azure Government, Azure in China][stv1sov2025] | February 28, 2025 |
| [Git repository retirement][git2025] | March 15, 2025 |
| [Direct management API retirement][mgmtapi2025] | March 15, 2025 |
+| [Workspaces preview breaking changes, part 2][workspaces2025march] | March 31, 2025 |
| [ADAL-based Microsoft Entra ID or Azure AD B2C identity provider retirement][msal2025] | September 30, 2025 |
| [CAPTCHA endpoint update][captcha2025] | September 30, 2025 |
| [Built-in analytics dashboard retirement][analytics2027] | March 15, 2027 |
The following table lists all the upcoming breaking changes and feature retireme
[devportal2023]: ../api-management-customize-styles.md
[shgwv0v1]: ./self-hosted-gateway-v0-v1-retirement-oct-2023.md
[stv12024]: ./stv1-platform-retirement-august-2024.md
+[stv1sov2025]: ./stv1-platform-retirement-sovereign-clouds-february-2025.md
[msal2025]: ./identity-provider-adal-retirement-sep-2025.md
[captcha2025]: ./captcha-endpoint-change-sep-2025.md
[metrics2023]: ./metrics-retirement-aug-2023.md
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
Title: Azure API Management - stv1 platform retirement (August 2024) | Microsoft Docs
-description: Azure API Management will retire the stv1 compute platform effective 31 August 2024. Instances hosted on the stv1 platform must be migrated to the stv2 platform.
+ Title: Azure API Management - global Azure - stv1 platform retirement (August 2024)
+description: In the global Azure cloud, Azure API Management will retire stv1 compute platform effective 31 August 2024. Instances must be migrated to stv2 platform.
Previously updated : 12/19/2023 Last updated : 08/08/2024
-# stv1 platform retirement (August 2024)
+# API Management stv1 platform retirement - Global Azure cloud (August 2024)
[!INCLUDE [api-management-availability-premium-dev-standard-basic](../../../includes/api-management-availability-premium-dev-standard-basic.md)]
-As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. **The infrastructure associated with the API Management `stv1` compute platform version will be retired effective 31 August 2024.** A more current compute platform version (`stv2`) is already available, and provides enhanced service capabilities.
+As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. **The infrastructure associated with the API Management `stv1` compute platform version will be retired effective 31 August 2024 in the global Microsoft Azure cloud.** A more current compute platform version (`stv2`) is already available, and provides enhanced service capabilities.
+
+> [!NOTE]
+> For API Management instances deployed in Microsoft Azure Government cloud or Microsoft Azure operated by 21Vianet cloud (Azure in China), the retirement date for the `stv1` platform is 28 February 2025. [Learn more](stv1-platform-retirement-sovereign-clouds-february-2025.md)
The following table summarizes the compute platforms currently used for instances in the different API Management service tiers.
Support for API Management instances hosted on the `stv1` platform will be retired by 31 August 2024.
> [!WARNING]
> If your instance is currently hosted on the `stv1` platform, you must migrate to the `stv2` platform. Failure to migrate by the retirement date might result in loss of the environments running APIs and all configuration data.
-
## What do I need to do?

**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.**
api-management Stv1 Platform Retirement Sovereign Clouds February 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-sovereign-clouds-february-2025.md
+
+ Title: Azure API Management - stv1 platform retirement - Azure Government, Azure in China (February 2025)
+description: In Azure Government and Azure operated by 21Vianet, API Management will retire stv1 platform effective 28 February 2025. Instances must be migrated to stv2 platform.
++++ Last updated : 08/09/2024+++
+# API Management stv1 platform retirement - Azure Government and Azure operated by 21Vianet (February 2025)
++
+As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. **The infrastructure associated with the API Management `stv1` compute platform version will be retired effective 28 February 2025 in Microsoft Azure Government and in Microsoft Azure operated by 21Vianet (Azure in China).** A more current compute platform version (`stv2`) is already available, and provides enhanced service capabilities.
+
+> [!NOTE]
+> For API Management instances deployed in global Microsoft Azure, the retirement date for the `stv1` platform is 31 August 2024. [Learn more](stv1-platform-retirement-august-2024.md)
+
+The following table summarizes the compute platforms currently used for instances in the different API Management service tiers.
+
+| Version | Description | Architecture | Tiers |
+| -| -| -- | - |
+| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium |
+| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium |
+| `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption |
+
+**For continued support and to take advantage of upcoming features, customers must [migrate](../migrate-stv1-to-stv2.md) their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform.** The `stv2` compute platform comes with additional features and improvements such as support for Azure Private Link and other networking features.
+
+New instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform already. Existing instances on the `stv1` compute platform will continue to work normally until the retirement date, but those instances won't receive the latest features available to the `stv2` platform.
+
+## Is my service affected by this?
+
+If the value of the `platformVersion` property of your service is `stv1`, it's hosted on the `stv1` platform. See [How do I know which platform hosts my API Management instance?](../compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance)
+
+## What is the deadline for the change?
+
+In Azure Government and Azure operated by 21Vianet, support for API Management instances hosted on the `stv1` platform will be retired by 28 February 2025.
+
+## What do I need to do?
+
+**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 28 February 2025.**
+
+If you have existing instances hosted on the `stv1` platform, follow our **[migration guide](../migrate-stv1-to-stv2.md)** to ensure a successful migration.
+++
+## Related content
+
+* [Migrate from stv1 platform to stv2](../migrate-stv1-to-stv2.md)
+* See all [upcoming breaking changes and feature retirements](overview.md).
api-management Llm Semantic Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-lookup-policy.md
Use the `llm-semantic-cache-lookup` policy to perform cache lookup of responses
### Example with corresponding llm-semantic-cache-store policy

## Related policies
api-management Llm Semantic Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-store-policy.md
The `llm-semantic-cache-store` policy caches responses to chat completion API an
### Example with corresponding llm-semantic-cache-lookup policy

## Related policies
api-management Migrate Stv1 To Stv2 No Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-no-vnet.md
You can choose whether the virtual IP address of API Management will change, or
#### [Azure CLI](#tab/cli)
-Run the following Azure CLI commands, setting variables where indicated with the name of your API Management instance and the name of the resource group in which it was created.
-> [!NOTE]
-> The Migrate to `stv2` REST API is available starting in API Management REST API version `2022-04-01-preview`.
-> [!NOTE]
-> The following script is written for the bash shell. To run the script in PowerShell, prefix the variable names with the `$` character. Example: `$APIM_NAME`.
-
-```azurecli
-APIM_NAME={name of your API Management instance}
-RG_NAME={name of your resource group}
-# Get resource ID of API Management instance
-APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
-# Call REST API to migrate to stv2 and change VIP address
-az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "NewIp"}'
-# Alternate call to migrate to stv2 and preserve VIP address
-# az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
-```
[!INCLUDE [api-management-validate-migration-to-stv2](../../includes/api-management-validate-migration-to-stv2.md)]
api-management Migrate Stv1 To Stv2 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-vnet.md
Previously updated : 06/18/2024 Last updated : 08/19/2024
[!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
-This article provides steps to migrate an API Management instance hosted on the `stv1` compute platform in-place to the `stv2` platform when the instance is injected (deployed) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. For this scenario, migrate your instance by updating the VNet configuration settings. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance).
+This article provides steps to migrate an API Management instance hosted on the `stv1` compute platform in-place to the `stv2` platform when the instance is injected (deployed) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance).
+
+For a VNet-injected instance, you have the following migration options:
+
+* [**Option 1: Keep the same subnet**](#option-1-migrate-and-keep-same-subnet) - Migrate the instance in-place and keep the instance's existing subnet configuration. You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated. Currently, the [Migrate to Stv2](/rest/api/apimanagement/api-management-service/migratetostv2) REST API supports migrating the instance using the same subnet configuration.
+
+* [**Option 2: Change to a new subnet**](#option-2-migrate-and-change-to-new-subnet) - Migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet. The migration process changes the VIP address(es) of the instance. After migration, you need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address(es). Currently, the **Platform migration** blade in the Azure portal supports this migration option.
If you need to migrate a *non-VNet-injected* API Management instance hosted on the `stv1` platform, see [Migrate a non-VNet-injected API Management instance to the stv2 platform](migrate-stv1-to-stv2-no-vnet.md).

[!INCLUDE [api-management-migration-alert](../../includes/api-management-migration-alert.md)]

+
> [!CAUTION]
> * Migrating your API Management instance to the `stv2` platform is a long-running operation.
-> * The VIP address of your instance will change. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address. Plan your migration accordingly.
> * Migration to `stv2` is not reversible.
+
## What happens during migration?

API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer.

* The upgrade process involves creating a new compute in parallel to the old compute, which can take up to 45 minutes.
* The API Management status in the Azure portal will be **Updating**.
-* The VIP address (or addresses, for a multi-region deployment) of the instance will change.
-* Azure manages the management endpoint DNS, and updates to the new compute immediately on successful migration.
-* The gateway DNS still points to the old compute if a custom domain is in use.
-* If custom DNS isn't in use, the gateway and portal DNS points to the new compute immediately.
-* For an instance in internal VNet mode, customer manages the DNS, so the DNS entries continue to point to old compute until updated by the customer.
-* It's the DNS that points to either the new or the old compute and hence no downtime to the APIs.
-* Changes are required to your firewall rules, if any, to allow the new compute subnet to reach the backends.
-* After successful migration, the old compute is automatically decommissioned after approximately 15 minutes by default. You can enable a migration setting to retain the old gateway for 48 hours. *The 48 hour delay option is only available for VNet-injected services.*
+* For certain migration options, the VIP address (or addresses, for a multi-region deployment) of the instance will change. If you migrate and keep the same subnet configuration, you can choose to preserve the VIP address or generate a new public VIP.
+* For migration scenarios when a new VIP address is generated:
+ * Azure manages the management endpoint DNS, which is updated to the new compute immediately on successful migration.
+ * The gateway DNS still points to the old compute if a custom domain is in use.
+ * If custom DNS isn't in use, the gateway and portal DNS points to the new compute immediately.
+ * For an instance in internal VNet mode, customer manages the DNS, so the DNS entries continue to point to old compute until updated by the customer.
+ * Because it's DNS that determines whether traffic reaches the new or the old compute, there's no downtime to the APIs.
+ * Changes are required to your firewall rules, if any, to allow the new compute subnet to reach the backends.
+ * After successful migration, the old compute is automatically decommissioned after a short period. Using the **Platform migration** blade in the portal, you can enable a migration setting to retain the old gateway for 48 hours. *The 48-hour delay option is only available for VNet-injected services.*
## Prerequisites
-* An API Management instance hosted on the `stv1` compute platform. To confirm that your instance is hosted on the `stv1` platform, see [How do I know which platform hosts my API Management instance?](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) The instance must be injected in a virtual network.
+* An API Management instance hosted on the `stv1` compute platform. To confirm that your instance is hosted on the `stv1` platform, see [How do I know which platform hosts my API Management instance?](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance)
+* The instance must currently be deployed in an external or internal VNet.
-* A new subnet in the current virtual network, in each region where the API Management instance is deployed. (Alternatively, set up a subnet in a different virtual network in the same regions and subscription as your API Management instance). A network security group must be attached to the subnet, and [NSG rules](api-management-using-with-vnet.md#configure-nsg-rules) for API Management must be configured.
+Other prerequisites are specific to the migration options in the following sections.
-* (Optional) A new Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region(s) and subscription as your API Management instance. For details, see [Prerequisites for network connections](api-management-using-with-vnet.md#prerequisites).
+## Option 1: Migrate and keep same subnet
- [!INCLUDE [api-management-publicip-internal-vnet](../../includes/api-management-publicip-internal-vnet.md)]
+You can migrate your API Management instance to the `stv2` platform while keeping the existing subnet configuration, which simplifies your migration. Currently, you can use the Migrate to Stv2 REST API to migrate the instance with the same subnet configuration.
-## Trigger migration of a network-injected API Management instance
+### Prerequisites
-Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings in each region where the instance is deployed. After that update completes, as an optional step, you can migrate back to the original VNets and subnets you used.
+* A network security group must be attached to each subnet, and [NSG rules](virtual-network-reference.md#required-ports) for API Management on the `stv2` platform must be configured (a CLI sketch follows this list). The following are minimum connectivity settings:
-You can also migrate to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md), available in the **Premium** tier.
+ * Outbound to Azure Storage over port 443
+ * Outbound to Azure SQL over port 1433
+ * Outbound to Azure Key Vault over port 443
+ * Inbound from Azure Load Balancer over port 6390
+ * Inbound from ApiManagement service tag over port 3443
+ * Inbound over port 80/443 for clients calling API Management service
+ * The subnet must have service endpoints enabled for Azure Storage, Azure SQL, and Azure Key Vault
+* The address space of each existing subnet must be large enough to host a copy of your service side by side with the existing deployment during migration.
+* Other network considerations:
+ * Turn off any autoscale rules configured for API Management instances deployed in the subnet. Autoscale rules can interfere with the migration process.
+ * If you have multiple API Management instances in the same subnet, migrate each instance in sequence. We recommend that you promptly migrate all instances in the subnet to avoid any potential issues with instances hosted on different platforms in the same subnet.
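A rough sketch of configuring these prerequisites with the Azure CLI might look like the following (resource names and rule priorities are placeholders, not prescribed values):

```bash
# Sketch: enable the required service endpoints on the API Management subnet.
az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <apim-subnet> \
    --service-endpoints Microsoft.Storage Microsoft.Sql Microsoft.KeyVault

# Sketch: allow inbound management traffic from the ApiManagement service tag on port 3443.
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name <apim-nsg> \
    --name AllowApiManagementEndpoint \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes ApiManagement \
    --source-port-ranges '*' \
    --destination-address-prefixes VirtualNetwork \
    --destination-port-ranges 3443

# Sketch: allow inbound Azure Load Balancer traffic on port 6390.
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name <apim-nsg> \
    --name AllowAzureLoadBalancer \
    --priority 110 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes AzureLoadBalancer \
    --source-port-ranges '*' \
    --destination-address-prefixes VirtualNetwork \
    --destination-port-ranges 6390
```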
-> [!IMPORTANT]
-> The VIP address(es) of your API Management instance will change. However, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and peered VNets to use the new VIP address(es).
+
+### Public IP address options - same-subnet migration
+
+You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated.
+
+* **Preserve virtual IP address** - If you preserve the VIP address in a VNet in external mode, API requests can remain responsive during migration (see [Expected downtime](#expected-downtime-and-compute-retention)); for a VNet in internal mode, temporary downtime is expected. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration.
+
+ With this option, the `stv1` compute is deleted permanently after the migration is complete. There is no option to retain it temporarily.
+
+* **New virtual IP address** - If you choose this option, API Management generates a new VIP address for your instance. API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+
+ With this option, the `stv1` compute is retained for a period by default after migration is complete so that you can validate the migrated instance and confirm the network and DNS configuration.
++
+### Precreated IP address for migration
+
+API Management precreates a public IP address for the migration process. Find the precreated IP address in the JSON output of your API Management instance's properties. Under `customProperties`, the precreated IP address is the value of the `Microsoft.WindowsAzure.ApiManagement.Stv2MigrationPreCreatedIps` property. For a multi-region deployment, the value is a comma-separated list of precreated IP addresses.
+
+Use the precreated IP address (or addresses) to help you manage the migration process:
+
+* When you migrate and preserve the VIP address, the precreated IP address is assigned temporarily to the new `stv2` deployment, before the original IP address is assigned to the `stv2` deployment. If you have firewall rules limiting access to the API Management instance, for example, you can add the precreated IP address to the allowlist to preserve continuity of client access during migration. After migration is complete, you can remove the precreated IP address from your allowlist.
+* When you migrate and generate a new VIP address, the precreated IP address is assigned to the new `stv2` deployment during migration and persists after migration is complete. Use the precreated IP address to update your network dependencies, such as DNS and firewall rules, to point to the new IP address.
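One way to read this property programmatically (a sketch; the subscription ID, resource group, service name, and `api-version` are placeholders or assumptions) is with `az rest`:

```bash
# Sketch: read the precreated migration IP address(es) from the service properties.
# The api-version shown is an assumption; use a current API Management API version.
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>?api-version=2023-09-01-preview" \
    --query "properties.customProperties.\"Microsoft.WindowsAzure.ApiManagement.Stv2MigrationPreCreatedIps\"" \
    --output tsv
```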
+
+### Expected downtime and compute retention
-### Update network configuration
+When migrating a VNet-injected instance and keeping the same subnet configuration, minimal or no downtime for the API gateway is expected. The following table summarizes the expected downtime and `stv1` compute retention for each migration scenario when keeping the same subnet:
-You can use the Azure portal to migrate to a new subnet in the same or a different VNet. The following image shows a high level overview of what happens during migration to a new subnet.
+|VNet mode |Public IP option |Expected downtime | `stv1` compute retention |
+|---|---|---|---|
+|External | Preserve VIP | No downtime; traffic is served on a temporary IP address for up to 20 minutes during migration to the new `stv2` deployment | No retention |
+|External | New VIP | No downtime | Retained by default for 15 minutes to allow you to update network dependencies |
+|Internal | Preserve VIP | Downtime for approximately 20 minutes during migration while the existing IP address is assigned to the new `stv2` deployment. | No retention |
+|Internal | New VIP | No downtime | Retained by default for 4 hours to allow you to update network dependencies |
++
+### Migration script
+
+> [!NOTE]
+> If your API Management instance is deployed in multiple regions, the REST API migrates the VNet settings for all locations of your instance using a single call.
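A sketch of invoking the migration through `az rest` follows; the `api-version` and the `mode` values (`PreserveIp` or `NewIP`) are assumptions to verify against the [Migrate to Stv2](/rest/api/apimanagement/api-management-service/migratetostv2) REST API reference:

```bash
# Sketch: trigger in-place, same-subnet migration to the stv2 platform.
# The api-version and "mode" value are assumptions; check the REST API reference.
az rest --method post \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/migrateToStv2?api-version=2023-09-01-preview" \
    --body '{"mode": "PreserveIp"}'
```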
++++
+## Option 2: Migrate and change to new subnet
+
+Using the Azure portal, you can migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet.
+
+The following image shows a high level overview of what happens during migration to a new subnet.
:::image type="content" source="media/migrate-stv1-to-stv2-vnet/inplace-new-subnet.gif" alt-text="Diagram of in-place migration to a new subnet.":::
-#### [Portal](#tab/portal)
+### Prerequisites
+
+* A new subnet in the current virtual network, in each region where the API Management instance is deployed. (Alternatively, set up a subnet in a different virtual network in the same regions and subscription as your API Management instance). A network security group must be attached to each subnet, and [NSG rules](virtual-network-reference.md#required-ports) for API Management on the `stv2` platform must be configured.
+
+* (Optional) A new Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region(s) and subscription as your API Management instance. For details, see [Prerequisites for network connections](virtual-network-injection-resources.md).
++
+### Migration steps
1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. In the left menu, under **Settings**, select **Platform migration**.
You can use the Azure portal to migrate to a new subnet in the same or a differe
:::image type="content" source="media/migrate-stv1-to-stv2-vnet/select-location.png" alt-text="Screenshot of selecting network migration settings in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/select-location.png":::
- * Select either **Return to original subnet as soon as possible** or **Stay in the new subnet and keep stv1 compute around for 48 hours** after migration. If you choose the former, the `stv1` compute will be deleted approximately 15 minutes after migration, allowing you to proceed directly with migration back to the original subnet if desired. If you choose the latter, the `stv1` compute is retained for 48 hours. You can use this period to validate your network settings and connectivity.
+ * Select either **Return to original subnet as soon as possible** or **Stay in the new subnet and keep stv1 compute around for 48 hours** after migration. If you choose the former, the `stv1` compute will be deleted approximately 15 minutes after migration, allowing you to proceed directly with [manual migration back to the original subnet](#optional-migrate-back-to-original-subnet) if desired. If you choose the latter, the `stv1` compute is retained for 48 hours. You can use this period to validate your network settings and connectivity.
- :::image type="content" source="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway-small.png" alt-text="Screenshot of options to retain stv1 compute in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png":::
+ :::image type="content" source="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png" alt-text="Screenshot of options to retain stv1 compute in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png":::
1. In **Step 3**, confirm you want to migrate, and select **Migrate**. The status of your API Management instance changes to **Updating**. The migration process takes approximately 45 minutes to complete. When the status changes to **Online**, migration is complete. If your API Management instance is deployed in multiple regions, repeat the preceding steps to continue migrating VNet settings for the remaining locations of your instance. -
-## (Optional) Migrate back to original subnet
+### (Optional) Migrate back to original subnet
-You can optionally migrate back to the original subnet you used in each region after migration to the `stv2` platform. To do so, update the VNet configuration again, this time specifying the original VNet and subnet in each region. As in the preceding migration, expect a long-running operation, and expect the VIP address to change.
+If you migrated and changed to a new subnet, optionally migrate back to the original subnet you used in each region. To do so, update the VNet configuration again, this time specifying the original VNet and subnet in each region. As in the preceding migration, expect a long-running operation, and expect the VIP address to change.
The following image shows a high level overview of what happens during migration back to the original subnet.
The following image shows a high level overview of what happens during migration
> If the VNet and subnet are locked (because other `stv1` platform-based API Management instances are deployed there) or the resource group where the original VNet is deployed has a [resource lock](../azure-resource-manager/management/lock-resources.md), make sure to remove the lock before migrating back to the original subnet. Wait for lock removal to complete before attempting the migration to the original subnet. [Learn more](api-management-using-with-internal-vnet.md#challenges-encountered-in-reassigning-api-management-instance-to-previous-subnet).
-### Additional prerequisites
+#### Additional prerequisites
-* The unlocked original subnet, in each region where the API Management instance is deployed. A network security group must be attached to the subnet, and [NSG rules](api-management-using-with-vnet.md#configure-nsg-rules) for API Management must be configured.
+* The unlocked original subnet, in each region where the API Management instance is deployed. A network security group must be attached to the subnet, and [NSG rules](virtual-network-reference.md#required-ports) for API Management must be configured.
* (Optional) A new Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region(s) and subscription as your API Management instance. [!INCLUDE [api-management-publicip-internal-vnet](../../includes/api-management-publicip-internal-vnet.md)]
-### Update VNet configuration
+#### Update VNet configuration
1. In the [portal](https://portal.azure.com), navigate to your original VNet. 1. In the left menu, select **Subnets**, and then the original subnet.
After you update the VNet configuration, the status of your API Management insta
- **What are the prerequisites for the migration?**
- For VNet-injected instances, you'll need a new subnet to migrate in each VNet (either external or internal mode). In external mode, optionally supply a public IP address resource. The subnet must have an NSG attached to it following the rules for `stv2` platform as described [here](./api-management-using-with-vnet.md?tabs=stv2#configure-nsg-rules).
+ For VNet-injected instances, see the prerequisites for the options to [migrate and keep the same subnet](#option-1-migrate-and-keep-same-subnet) or to [migrate and change to a new subnet](#option-2-migrate-and-change-to-new-subnet).
- **Will the migration cause a downtime?**
- Since the old gateway is purged only after the new compute is healthy and online, there shouldn't be any downtime if default hostnames are in use. It's critical that all network dependencies are taken care of upfront, for the impacted APIs to be functional. However, if custom domains are in use, they'll be pointing to the purged compute until they're updated which may cause a downtime. Alternatively, enable a migration setting to retain the old gateway for 48 hours. Having the old and the new compute coexist will facilitate validation, and then you can update the custom DNS entries at will.
+ When migrating a VNet-injected instance and keeping the same subnet configuration, minimal or no downtime for the API gateway is expected. See the summary table in [Expected downtime](#expected-downtime-and-compute-retention).
+
+ When migrating and changing to a new VIP address, there shouldn't be any downtime if default hostnames are in use. It's critical that all network dependencies are addressed upfront for the impacted APIs to remain functional. However, if custom domains are in use, they point to the purged compute until they're updated, which may cause downtime. Alternatively, for certain migration options, enable a migration setting to retain the old gateway for 48 hours. Having the old and the new compute coexist facilitates validation, and then you can update the custom DNS entries at will.
- **My traffic is force tunneled through a firewall. What changes are required?**
- - First of all, make sure that the new subnet(s) you created for the migration retains the following configuration (they should be already configured in your current subnet):
+ - First of all, make sure that the subnet(s) you use for the migration retain the following configuration (they should already be configured if you're migrating and keeping your current subnet):
- Enable service endpoints as described [here](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance) - The UDR (user-defined route) has the hop from **ApiManagement** service tag set to "Internet" and not only to your firewall address
- - The [requirements for NSG configuration for stv2](./api-management-using-with-vnet.md?tabs=stv2#configure-nsg-rules) remain the same whether you have firewall or not; make sure your new subnet has it
+ - The [requirements for NSG configuration for stv2](virtual-network-reference.md#required-ports) remain the same whether or not you have a firewall; make sure your subnet meets them
- Firewall rules referring to the current IP address range of the API Management instance should be updated to use the IP address range of your new subnet. - **Can data or configuration losses occur by/during the migration?**
After you update the VNet configuration, the status of your API Management insta
- **Can I do the migration using the portal?**
- Yes, VNet-injected instances can be migrated by changing the subnet configuration(s) in the **Platform migration** blade.
+ Yes, VNet-injected instances can be migrated by using the **Platform migration** blade.
- **Can I preserve the IP address of the instance?**
- There's no way currently to preserve the IP address if your instance is injected into a VNet.
+ Yes, you can preserve the IP address by [migrating and keeping the same subnet](#option-1-migrate-and-keep-same-subnet).
- **Is there a migration path without modifying the existing instance?**-
+
Yes, you need a [side-by-side migration](migrate-stv1-to-stv2.md#alternative-side-by-side-deployment). That means you create a new API Management instance in parallel with your current instance and copy the configuration over to the new instance. - **What happens if the migration fails?**
After you update the VNet configuration, the status of your API Management insta
- **What functionality is not available during migration?**
- API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+ API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. In scenarios where you migrate to a new subnet, you'll need to update any network dependencies after migration, including DNS, firewall rules, and VNets, to use the new VIP address.
- **How long will the migration take?**
After you update the VNet configuration, the status of your API Management insta
- **Is there a way to validate the VNet configuration before attempting migration?**
- You can deploy a new API Management instance with the new VNet, subnet, and (optional) IP address resource that you use for the actual migration. Navigate to the **Network status** page after the deployment is completed, and verify if every endpoint connectivity status is green. If yes, you can remove this new API Management instance and proceed with the real migration with your original `stv1`-hosted service.
+ If you plan to change subnets during migration, you can deploy a new API Management instance with the VNet, subnet, and (optional) IP address resource that you will use for the actual migration. Navigate to the **Network status** page after the deployment completes, and verify that every endpoint connectivity status is green. If so, you can remove this new API Management instance and proceed with the real migration of your original `stv1`-hosted service.
- **Can I roll back the migration if required?** If there's a failure during the migration process, the instance will automatically roll back to the `stv1` platform. However, after the service migrates successfully, you can't roll back to the `stv1` platform.
- After a VNet-injected service migrates successfully, there is a short window if time during which the old gateway continues to serve traffic and you can confirm your network settings. See [Confirm settings before old gateway is purged](#confirm-settings-before-old-gateway-is-purged)
+ When migrating and changing to a new VIP, there is a short window of time after migration during which the old gateway continues to serve traffic and you can confirm your network settings. See [Confirm settings before old gateway is purged](#confirm-settings-before-old-gateway-is-purged).
- **Is there any change required in custom domain/private DNS zones?**
- With VNet-injected instances in internal mode, you'll need to update the private DNS zones to the new VNet IP address acquired after the migration. Pay attention to update non-Azure DNS zones, too (for example, your on-premises DNS servers pointing to API Management private IP address). However, in external mode, the migration process will automatically update the default domains if in use.
+ With VNet-injected instances in internal mode, when changing to a new VIP, you'll need to update the private DNS zones to the new VNet IP address acquired after the migration. Be sure to update non-Azure DNS zones, too (for example, your on-premises DNS servers pointing to the API Management private IP address). However, in external mode, the migration process automatically updates the default domains if in use.
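For example, repointing an A record in an Azure private DNS zone to the new private VIP might look like this sketch (the resource group, zone, record set, and IP addresses are all placeholders):

```bash
# Sketch: replace the old private IP with the new one on an existing A record
# in a private DNS zone (all names and addresses are placeholders).
az network private-dns record-set a remove-record \
    --resource-group <resource-group> \
    --zone-name <private-zone-name> \
    --record-set-name <apim-host-record> \
    --ipv4-address <old-private-vip>

az network private-dns record-set a add-record \
    --resource-group <resource-group> \
    --zone-name <private-zone-name> \
    --record-set-name <apim-host-record> \
    --ipv4-address <new-private-vip>
```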
- **My stv1 instance is deployed to multiple Azure regions (multi-region). How do I upgrade to stv2?**
- Multi-region deployments include more managed gateways deployed in other locations. Migrate each location separately by updating the corresponding network settings - for example, using the **Platform migration** blade. The instance is considered migrated to the new platform only when all the locations are migrated. All regional gateways continue to operate normally throughout the migration process.
+ Multi-region deployments include more managed gateways deployed in other locations. When you migrate using the **Platform migration** blade in the portal, you migrate each location separately. The Migrate to Stv2 REST API migrates all locations in one call. The instance is considered migrated to the new platform only when all the locations are migrated. All regional gateways continue to operate normally throughout the migration process.
- **Can I upgrade my stv1 instance to the same subnet?**
- - You can't migrate the `stv1` instance to the same subnet in a single pass without downtime. However, you can optionally move your migrated instance back to the original subnet. More details [here](#optional-migrate-back-to-original-subnet).
- - The old gateway takes between 15 mins to 45 mins to vacate the subnet, so that you can initiate the move. However, you can enable a migration setting to retain the old gateway for 48 hours.
- - Ensure that the old subnet's networking for [NSG](./api-management-using-with-internal-vnet.md?tabs=stv2#configure-nsg-rules) and [firewall](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance) is updated for `stv2` dependencies.
- - Subnet IP address allocation is nondeterministic, therefore the original ILB (ingress) IP for "internal mode" deployments may change when you move back to the original subnet. This would require a DNS change if you're using A records.
-
+ - Currently, you can only upgrade to the same subnet in a single pass when using the [Migrate to stv2 REST API](#option-1-migrate-and-keep-same-subnet).
+
+ Currently, if you use the **Platform migration** blade in the portal, you need to migrate to a new subnet and then migrate back to the original subnet:
+ - The old gateway takes between 15 and 45 minutes to vacate the subnet, after which you can initiate the move. However, you can enable a migration setting to retain the old gateway for 48 hours.
+ - Ensure that the old subnet's networking for [NSG](./virtual-network-reference.md#required-ports) and [firewall](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance) is updated for `stv2` dependencies.
+ - Subnet IP address allocation is nondeterministic, therefore the original ILB (ingress) IP for "internal mode" deployments may change when you move back to the original subnet. This would require a DNS change if you're using A records.
+
- **Can I test the new gateway before switching the live traffic?** - By default, the old and the new managed gateways coexist for 15 minutes, which is a small window of time to validate the deployment. You can enable a migration setting to retain the old gateway for 48 hours. This change keeps the old and the new managed gateways active to receive traffic and facilitate validation.
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
Previously updated : 03/14/2024 Last updated : 08/09/2024
Here we help you find guidance to migrate your API Management instance hosted on the `stv1` compute platform to the newer `stv2` platform. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance).
-There are two different migration scenarios, depending on whether or not your API Management instance is currently deployed (injected) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. Choose the migration guide for your scenario. Both scenarios migrate an existing instance in-place to the `stv2` platform.
+There are two different in-place migration scenarios, depending on whether or not your API Management instance is currently deployed (injected) in a VNet. Choose the migration guide for your scenario.
[!INCLUDE [api-management-migration-alert](../../includes/api-management-migration-alert.md)] ## In-place migration scenarios
-* [**Scenario 1: Migrate a non-VNet-injected API Management instance**](migrate-stv1-to-stv2-no-vnet.md) - Migrate your instance to the `stv2` platform using the portal or the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API.
+Migrate your instance in-place to the `stv2` platform using the **Platform migration** blade in the portal or the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API.
-* [**Scenario 2: Migrate a VNet-injected API Management instance**](migrate-stv1-to-stv2-vnet.md) - Migrate your instance to the `stv2` platform by updating the VNet configuration settings using the portal.
+* [**Scenario 1: Migrate a non-VNet-injected API Management instance**](migrate-stv1-to-stv2-no-vnet.md)
+
+* [**Scenario 2: Migrate a VNet-injected API Management instance**](migrate-stv1-to-stv2-vnet.md)
## Alternative: Side-by-side deployment
app-service App Service Configuration References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configuration-references.md
To get started with using App Configuration references in App Service, you'll fi
1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-azure-app-configuration-create.md).
- > [!NOTE]
- > App Configuration references do not yet support network-restricted configuration stores.
- 1. Create a [managed identity](overview-managed-identity.md) for your application. App Configuration references will use the app's system assigned identity by default, but you can [specify a user-assigned identity](#access-app-configuration-store-with-a-user-assigned-identity).
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
With service endpoints, you can configure your app with application gateways or
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-service-tag-add.png?v2" alt-text="Screenshot of the 'Add Restriction' pane with the Service Tag type selected.":::
-All available service tags are supported in access restriction rules. Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags]. Use Azure Resource Manager templates or scripting to configure more advanced rules like regional scoped rules.
+All publicly available service tags are supported in access restriction rules. Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags]. Use Azure Resource Manager templates or scripting to configure more advanced rules like regional scoped rules.
+
+> [!NOTE]
+> When creating service tag-based rules through the Azure portal or Azure CLI, you need read access at the subscription level to get the full list of service tags for selection/validation. In addition, the `Microsoft.Network` resource provider must be registered on the subscription.
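If the provider isn't registered yet, a quick way to register it (assuming you have permissions on the subscription) is with the Azure CLI:

```bash
# Register the Microsoft.Network resource provider on the current subscription.
az provider register --namespace Microsoft.Network

# Optionally, check the registration state afterward.
az provider show --namespace Microsoft.Network --query registrationState --output tsv
```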
### Edit a rule
application-gateway Application Gateway Private Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md
Application Gateway Subnet is the subnet within the Virtual Network where the Ap
## Outbound Internet connectivity
-Application Gateway deployments that contain only a private frontend IP configuration (do not have a public IP frontend configuration) aren't able to egress traffic destined to the Internet. This configuration affects communication to backend targets that are publicly accessible via the Internet.
+Application Gateway deployments that contain only a private frontend IP configuration (that is, without a public IP frontend configuration associated with a request routing rule) aren't able to egress traffic destined to the Internet. This configuration affects communication to backend targets that are publicly accessible via the Internet.
To enable outbound connectivity from your Application Gateway to an Internet facing backend target, you can utilize [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or forward traffic to a virtual appliance that has access to the Internet.
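As a rough sketch of the NAT gateway approach (all resource names here are placeholders, not values from this article), you might create a NAT gateway and attach it to the Application Gateway subnet like this:

```bash
# Sketch: create a NAT gateway with a Standard public IP and attach it to the
# Application Gateway subnet so private-only deployments can reach the Internet.
az network public-ip create \
    --resource-group <resource-group> \
    --name nat-pip \
    --sku Standard

az network nat gateway create \
    --resource-group <resource-group> \
    --name appgw-natgw \
    --public-ip-addresses nat-pip

az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <appgw-subnet> \
    --nat-gateway appgw-natgw
```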
azure-functions Durable Functions Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-portal.md
If you are creating JavaScript Durable Functions, you'll need to install the [`d
1. Go back to the **HttpStart** function, choose **Get function Url**, and select the **Copy to clipboard** icon to copy the URL. You use this URL to start the **HelloSequence** function.
-1. Use one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
-
- The following example is a cURL command that sends a POST request to the durable function:
+1. Use a secure HTTP test tool to send an HTTP POST request to the URL endpoint. This example is a cURL command that sends a POST request to the durable function:
```bash
curl -X POST https://{your-function-app-name}.azurewebsites.net/api/orchestrators/{functionName} --header "Content-Length: 0"
```
If you are creating JavaScript Durable Functions, you'll need to install the [`d
} ```
- [!INCLUDE [api-test-http-request-tools-caution](../../../includes/api-test-http-request-tools-caution.md)]
+ Make sure to choose an HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
1. Call the `statusQueryGetUri` endpoint URI and you see the current status of the durable function, which might look like this example:
azure-functions Durable Functions Isolated Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md
To complete this quickstart, you need:
* [.NET Core SDK](https://dotnet.microsoft.com/download) version 3.1 or later installed.
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] ## <a name="create-an-azure-functions-project"></a>Create an Azure Functions project
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
:::image type="content" source="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png" alt-text="Screenshot of the Azure local output window." lightbox="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png":::
-1. Use one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+1. Use an HTTP test tool to send an HTTP POST request to the URL endpoint.
The response is the HTTP function's initial result. It lets you know that the Durable Functions app orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs.
azure-functions Durable Functions Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-packages.md
Users of [Extension Bundles](../functions-bindings-register.md#extension-bundles
## GitHub repositories
-Durable Functions is developed in the open as OSS. Users are welcome to contribute to it's development, request features, and to report issues in the appropiate repositories:
+Durable Functions is developed in the open as OSS. Users are welcome to contribute to its development, request features, and report issues in the appropriate repositories:
* [azure-functions-durable-extension](https://github.com/Azure/azure-functions-durable-extension): For .NET in-process and the Azure Storage storage provider. * [durabletask-dotnet](https://github.com/microsoft/durabletask-dotnet): For .NET isolated.
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
To complete this quickstart, you need:
For Azure Functions _4.x_, Core Tools version 4.0.4915 or later is required.
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+
* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
:::image type="content" source="media/quickstart-java/maven-functions-run.png" alt-text="Screenshot of Azure local output.":::
-1. Use an HTTP test tool to send an HTTP POST request to the URL endpoint.
-
- [!INCLUDE [api-test-http-request-tools-caution](../../../includes/api-test-http-request-tools-caution.md)]
+1. Use an HTTP test tool to send an HTTP POST request to the URL endpoint.
The response should look similar to the following example:
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md
To complete this quickstart, you need:
* [Azure Functions Core Tools](../functions-run-local.md) version 4.0.5382 or later installed. ::: zone-end-
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+
* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. ::: zone pivot="nodejs-model-v3"
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="nodejs-model-v3"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="nodejs-model-v4"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md
To complete this quickstart, you need:
* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed.
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+ * An Azure subscription. To use Durable Functions, you must have an Azure Storage account. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
:::image type="content" source="media/quickstart-js-vscode/functions-f5.png" alt-text="Screenshot of Azure local output.":::
-1. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+1. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
To complete this quickstart, you need:
* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed.
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+ * An Azure subscription. To use Durable Functions, you must have an Azure Storage account. * [Python](https://www.python.org/) version 3.7, 3.8, 3.9, or 3.10 installed.
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="python-mode-configuration"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="python-mode-decorators"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
azure-functions Quickstart Ts Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md
To complete this quickstart, you need:
* [Azure Functions Core Tools](../functions-run-local.md) version 4.0.5382 or later installed. ::: zone-end-
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](../functions-develop-local.md#http-test-tools).
+
* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. ::: zone pivot="nodejs-model-v3"
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="nodejs-model-v3"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
Azure Functions Core Tools gives you the capability to run an Azure Functions pr
::: zone pivot="nodejs-model-v4"
-5. Use your browser or one of these HTTP test tools to send an HTTP POST request to the URL endpoint:
-
- [!INCLUDE [api-test-http-request-tools](../../../includes/api-test-http-request-tools.md)]
+5. Use your browser or an HTTP test tool to send an HTTP POST request to the URL endpoint.
Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
azure-functions Event Grid How Tos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-grid-how-tos.md
To test an Event Grid trigger locally, you have to get Event Grid HTTP requests
1. [Generate a request](#generate-a-request) and copy the request body from the viewer app. 1. [Manually post the request](#manually-post-the-request) to the localhost URL of your Event Grid trigger function.
-To send an HTTP post request, you need an HTTP test tool, like one of these:
-
+To send an HTTP POST request, you need an HTTP test tool. Make sure to choose a tool that keeps your data secure. For more information, see [HTTP test tools](functions-develop-local.md#http-test-tools).
When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [`az eventgrid event-subscription update`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) Azure CLI command.
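A sketch of that update might look like the following (the subscription name, source resource ID, and endpoint values are placeholders; the webhook URL format shown assumes an Event Grid-triggered function):

```bash
# Sketch: point an existing Event Grid subscription at the production function endpoint.
az eventgrid event-subscription update \
    --name <subscription-name> \
    --source-resource-id <event-source-resource-id> \
    --endpoint "https://<function-app>.azurewebsites.net/runtime/webhooks/eventgrid?functionName=<function-name>&code=<key>"
```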
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-serverless-api.md
In this article, you learn how to build highly scalable APIs with Azure Function
## Prerequisites
+* An HTTP test tool that keeps your data secure. For more information, see [HTTP test tools](functions-develop-local.md#http-test-tools).
+ [!INCLUDE [Previous quickstart note](../../includes/functions-quickstart-previous-topics.md)] After you create this function app, you can follow the procedures in this article.
Next, test your function to see how it works with the new API surface:
1. Press Enter to confirm that your function is working. You should see the response, "*Hello John*."
-1. You can also call the endpoint with another HTTP method to confirm that the function isn't executed. To do so, use one of these HTTP test tools:
-
+1. You can also call the endpoint with another HTTP method to confirm that the function isn't executed. For HTTP methods other than GET, you need to use a secure [HTTP test tool](functions-develop-local.md#http-test-tools).
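For instance, a minimal curl check might look like this (the route is illustrative; substitute your function's actual route):

```bash
# Send a POST to the GET-only endpoint; the function shouldn't execute,
# and the response status should indicate failure (for example, 404 or 405).
curl --request POST --include http://localhost:7071/api/hello/John
```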
## Proxies overview
azure-functions Functions Custom Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-custom-handlers.md
In Azure, [query Application Insights traces](analyze-telemetry-data.md#query-te
### Test custom handler in isolation
-Custom handler apps are a web server process, so it may be helpful to start it on its own and test function invocations by sending mock [HTTP requests](#request-payload) using one of these tools:
-
+A custom handler app is a web server process, so it may be helpful to start it on its own and test function invocations by sending mock [HTTP requests](#request-payload). For sending HTTP requests with payloads, make sure to choose a tool that keeps your data secure. For more information, see [HTTP test tools](functions-develop-local.md#http-test-tools).
You can also use this strategy in your CI/CD pipelines to run automated tests on your custom handler.
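As an illustrative example (the port, function name, and binding name are placeholders, and the payload simply follows the `Data`/`Metadata` envelope of the request payload format):

```bash
# Sketch: invoke a custom handler directly with a mock request payload
# while it runs on its own (port and names are placeholders).
curl --request POST \
    --header "Content-Type: application/json" \
    --data '{"Data": {"myQueueItem": "hello world"}, "Metadata": {}}' \
    http://localhost:8080/MyQueueFunction
```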
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
When you develop your functions locally, you need to take trigger and binding be
During local testing, you must be running the host provided by Core Tools (func.exe) locally. For more information, see [Azure Functions Core Tools](functions-run-local.md).
+## HTTP test tools
+
+During development, it's easy to call any of your function endpoints from a web browser when they support the HTTP GET method. However, for other HTTP methods that support payloads, such as POST or PUT, you need to use an HTTP test tool to create and send these HTTP requests to your function endpoints.
+
+> [!CAUTION]
+> For scenarios where your requests must include sensitive data, make sure to use a tool that protects your data and reduces the risk of exposing any sensitive data to the public. Sensitive data you should protect might include credentials, secrets, access tokens, API keys, geolocation data, and even personally identifiable information (PII).
+
+You can keep your data secure by choosing an HTTP test tool that works either offline or locally, doesn't sync your data to the cloud, and doesn't require that you sign in to an online account. Some tools can also protect your data from accidental exposure by implementing specific security features.
+
+Avoid using tools that centrally store your HTTP request history (including sensitive information), don't follow best security practices, or don't respect data privacy concerns.
+
+Consider using one of these tools for securely sending HTTP requests to your function endpoints:
+
+- [Visual Studio Code](https://code.visualstudio.com/download) with an [extension from Visual Studio Marketplace](https://marketplace.visualstudio.com/vscode), such as [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client)
+- [PowerShell Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod)
+- [Microsoft Edge - Network Console tool](/microsoft-edge/devtools-guide-chromium/network-console/network-console-tool)
+- [Bruno](https://www.usebruno.com/)
+- [curl](https://curl.se/)
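For example, a minimal curl call that sends a JSON payload to a locally running HTTP-triggered function might look like this (the route and payload are illustrative):

```bash
# Illustrative POST with a JSON payload to a locally running function endpoint.
curl --request POST http://localhost:7071/api/HttpExample \
    --header "Content-Type: application/json" \
    --data '{"name": "Azure"}'
```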
+ ## Local storage emulator During local development, you can use the local [Azurite emulator](../storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](../storage/common/storage-use-azurite.md).
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
The procedure described in this article is equivalent to using the **Test/Run**
## Prerequisites
-The examples in this article use an HTTP test tool. You can obtain and use any of these tools that send HTTP requests:
-
+The examples in this article use an HTTP test tool. Make sure to choose a tool that keeps your data secure. For more information, see [HTTP test tools](functions-develop-local.md#http-test-tools).
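As a sketch of what the examples build toward, a manual trigger of a non-HTTP function on a locally running host might look like this (the function name and input value are placeholders):

```bash
# Sketch: manually trigger a non-HTTP function through the host's admin endpoint
# (function name and input are placeholders; deployed apps also require a master key).
curl --request POST \
    --header "Content-Type: application/json" \
    --data '{"input": "sample queue message"}' \
    http://localhost:7071/admin/functions/<YourFunctionName>
```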
## Define the request location
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
curl --request POST http://localhost:7071/api/MyHttpTrigger --data "{'name':'Azu
The following considerations apply when calling HTTP endpoints locally:
-+ You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use an HTTP testing tool that supports POST requests, like one of these:
-
- [!INCLUDE [api-test-http-request-tools](../../includes/api-test-http-request-tools.md)]
++ You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use an HTTP testing tool that also keeps your data secure. For more information, see [HTTP test tools](functions-develop-local.md#http-test-tools). + Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
From server web apps:
* [Dependencies](./asp-net-dependencies.md). Calls to SQL databases, HTTP calls to external services, Azure Cosmos DB, Azure Table Storage, Azure Blob Storage, and Azure Queue Storage. * [Exceptions](./asp-net-exceptions.md) and stack traces. * [Performance counters](./performance-counters.md): Performance counters are available when using:-- [Azure Monitor Application Insights agent](application-insights-asp-net-agent.md)-- [Azure monitoring for VMs or virtual machine scale sets](./azure-vm-vmss-apps.md)-- [Application Insights `collectd` writer](/previous-versions/azure/azure-monitor/app/deprecated-java-2x#collectd-linux-performance-metrics-in-application-insights-deprecated).
+ * [Azure Monitor Application Insights agent](application-insights-asp-net-agent.md)
+ * [Azure monitoring for VMs or virtual machine scale sets](./azure-vm-vmss-apps.md)
+ * [Application Insights `collectd` writer](/previous-versions/azure/azure-monitor/app/deprecated-java-2x#collectd-linux-performance-metrics-in-application-insights-deprecated).
* [Custom events and metrics](./api-custom-events-metrics.md) that you code. * [Trace logs](./asp-net-trace-logs.md) if you configure the appropriate collector.
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
reviewer: cweining Previously updated : 11/17/2023 Last updated : 08/21/2024
However, you may experience small CPU, memory, and I/O overhead associated with
The minidump is first written to disk and the amount of disk spaced is roughly the same as the working set of the original process. Writing the minidump can induce page faults as memory is read.
- The minidump is compressed during upload, which consumes both CPU and memory in the Snapshot Uploader process. The CPU, memory, and disk overhead for this is be proportional to the size of the process snapshot. Snapshot Uploader processes snapshots serially.
+ The minidump is compressed during upload, which consumes both CPU and memory in the Snapshot Uploader process. The CPU, memory, and disk overhead for this is proportional to the size of the process snapshot. Snapshot Uploader processes snapshots serially.
**When `TrackException` is called:**
Based on how Snapshot Debugger was enabled, see the following options:
* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of `Microsoft.ApplicationInsights.SnapshotCollector`.
-For the latest updates and bug fixes [consult the release notes](./snapshot-debugger.md#release-notes-for-microsoftapplicationinsightssnapshotcollector).
+For the latest updates and bug fixes [consult the release notes](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/blob/main/CHANGELOG.md).
## Check the uploader logs
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
Enable the Application Insights Snapshot Debugger for your application:
- [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
- [Azure Service Fabric](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
- [Azure Virtual Machines and Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-- [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-
-## Release notes for `Microsoft.ApplicationInsights.SnapshotCollector`
-
-This section contains the release notes for the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger.
-
-[Learn](./snapshot-debugger.md) more about the Application Insights Snapshot Debugger for .NET applications.
-
-For bug reports and feedback, [open an issue on GitHub](https://github.com/microsoft/ApplicationInsights-SnapshotCollector).
-
-
-### [1.4.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.6)
-A point release to address a regression when using .NET 8 applications.
-
-#### Bug fixes
-- Exceptions thrown from dynamically generated methods (e.g. compiled expression trees) in .NET 8 are not being tracked correctly. Fixed.
-
-### [1.4.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.5)
-A point release to address a user-reported bug.
-
-#### Bug fixes
-- Fixed AccessViolationException when reading some PDBs.
-
-#### Changes
-- Added a ReadMe to the NuGet package.
-- Updated msdia140.dll.
-
-### [1.4.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.4)
-A point release to address user-reported bugs.
-
-#### Bug fixes
-- Fixed [Exception during native component extraction when using a single file application.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/21)
-
-#### Changes
-- Lowered PDB scan failure messages from Error to Warning.
-- Updated msdia140.dll.
-- Avoid making a service connection if the debugger is disabled via site extension settings.
-
-### [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
-A point release to address user-reported bugs.
-
-#### Bug fixes
-- Fixed [Hide the IMDS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17).
-- Fixed [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19).
-<br>Snapshot Collector used via SDK isn't supported when the Interop feature is enabled. See [More not supported scenarios](snapshot-debugger-troubleshoot.md#not-supported-scenarios).
-
-### [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
-A point release to address a user-reported bug.
-
-#### Bug fixes
-Fixed [ArgumentException: Delegates must be of the same type](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/16).
-
-### [1.4.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.1)
-A point release to revert a breaking change introduced in 1.4.0.
-
-#### Bug fixes
-Fixed [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15).
-
-### [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
-Addressed multiple improvements and added support for Microsoft Entra authentication for Application Insights ingestion.
-
-#### Changes
-- Reduced Snapshot Collector package size by 60% from 10.34 MB to 4.11 MB.
-- Targeted netstandard2.0 only in Snapshot Collector.
-- Bumped Application Insights SDK dependency to 2.15.0.
-- Added back `MinidumpWithThreadInfo` when writing dumps.
-- Added `CompatibilityVersion` to improve synchronization between the Snapshot Collector agent and the Snapshot Uploader on breaking changes.
-- Changed `SnapshotUploader` LogFile naming algorithm to avoid excessive file I/O in App Service.
-- Added `pid`, `role name`, and `process start time` to uploaded blob metadata.
-- Used `System.Diagnostics.Process` in Snapshot Collector and Snapshot Uploader.
-
-#### New features
-Added Microsoft Entra authentication to `SnapshotCollector`. To learn more about Microsoft Entra authentication in Application Insights, see [Microsoft Entra authentication for Application Insights](../app/azure-ad-authentication.md).
-
-### [1.3.7.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.5)
-A point release to backport a fix from 1.4.0-pre.
-
-#### Bug fixes
-Fixed [ObjectDisposedException on shutdown](https://github.com/microsoft/ApplicationInsights-dotnet/issues/2097).
-
-### [1.3.7.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.4)
-A point release to address a problem discovered in testing the App Service codeless attach scenario.
-
-#### Changes
-The `netcoreapp3.0` target now depends on `Microsoft.ApplicationInsights.AspNetCore` >= 2.1.1 (previously >= 2.1.2).
-
-### [1.3.7.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.3)
-A point release to address a couple of high-impact issues.
-
-#### Bug fixes
-- Fixed PDB discovery in the *wwwroot/bin* folder, which was broken when we changed the symbol search algorithm in 1.3.6.
-- Fixed noisy `ExtractWasCalledMultipleTimesException` in telemetry.
-
-### [1.3.7](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7)
-#### Changes
-The `netcoreapp2.0` target of `SnapshotCollector` depends on `Microsoft.ApplicationInsights.AspNetCore` >= 2.1.1 (again). This change reverts behavior to how it was before 1.3.5. We tried to upgrade it in 1.3.6, but it broke some App Service scenarios.
-
-#### New features
-Snapshot Collector reads and parses the `ConnectionString` from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable or from the `TelemetryConfiguration`. Primarily, it's used to set the endpoint for connecting to the Snapshot service. For more information, see the [Connection strings documentation](../app/sdk-connection-string.md).
-
-#### Bug fixes
-Switched to using `HttpClient` for all targets except `net45` because `WebRequest` was failing in some environments because of an incompatible `SecurityProtocol` (requires TLS 1.2).
-
-### [1.3.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.6)
-#### Changes
-- `SnapshotCollector` now depends on `Microsoft.ApplicationInsights` >= 2.5.1 for all target frameworks. This requirement might be a breaking change if your application depends on an older version of the Microsoft.ApplicationInsights SDK.
-- Removed support for TLS 1.0 and 1.1 in Snapshot Uploader.
-- Period of PDB scans now defaults 24 hours instead of 15 minutes. Configurable via `PdbRescanInterval` on `SnapshotCollectorConfiguration`.
-- PDB scan searches top-level folders only, instead of recursive. This change might be a breaking change if your symbols are in subfolders of the binary folder.
-
-#### New features
-- Log rotation in `SnapshotUploader` to avoid filling the logs folder with old files.
-- Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications.
-- Added symbols to NuGet package.
-- Set more metadata when you upload minidumps.
-- Added an `Initialized` property to `SnapshotCollectorTelemetryProcessor`. It's a `CancellationToken`, which is canceled when the Snapshot Collector is initialized and connected to the service endpoint.
-- Snapshots can now be captured for exceptions in dynamically generated methods. An example is the compiled expression trees generated by Entity Framework queries.
-
-#### Bug fixes
-- `AmbiguousMatchException` loading Snapshot Collector due to Status Monitor.
-- `GetSnapshotCollector` extension method now searches all `TelemetrySinks`.
-- Don't start the Snapshot Uploader on unsupported platforms.
-- Handle `InvalidOperationException` when you're deoptimizing dynamic methods (for example, Entity Framework).
-
-### [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)
-- Added support for sovereign clouds (older versions don't work in sovereign clouds).
-- Adding Snapshot Collector made easier by using `AddSnapshotCollector()`. For more information, see [Enable Snapshot Debugger for .NET apps in Azure App Service](./snapshot-debugger-app-service.md).
-- Use the FISMA MD5 setting for verifying blob blocks. This setting avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.
-- Ignore .NET Framework frames when deoptimizing function calls. Control this behavior with the `DeoptimizeIgnoredModules` configuration setting.
-- Added the `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call.
-
-### [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4)
-- Allowed structured instrumentation keys.
-- Increased Snapshot Uploader robustness. Continue startup even if old uploader logs can't be moved.
-- Reenabled reporting more telemetry when *SnapshotUploader.exe* exits immediately (was disabled in 1.3.3).
-- Simplified internal telemetry.
-- **Experimental feature:** Snappoint collection plans: Add `snapshotOnFirstOccurence`. For more information, see [this GitHub article](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
-
-### [1.3.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.3)
-Fixed bug that was causing *SnapshotUploader.exe* to stop responding and not upload snapshots for .NET Core apps.
-
-### [1.3.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.2)
-- **Experimental feature:** Snappoint collection plans. For more information, see [this GitHub article](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
-- *SnapshotUploader.exe* exits when the runtime unloads the `AppDomain` from which `SnapshotCollector` is loaded, instead of waiting for the process to exit. This action improves the collector reliability when hosted in IIS.
-- Added configuration to allow multiple `SnapshotCollector` instances that are using the same instrumentation key to share the same `SnapshotUploader` process: `ShareUploaderProcess` (defaults to `true`).
-- Reported more telemetry when *SnapshotUploader.exe* exits immediately.
-- Reduced the number of support files *SnapshotUploader.exe* needs to write to disk.
-
-### [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)
-- Removed support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.
-- Increased the default limit on how many snapshots can be captured in 10 minutes from one to three.
-- Allow *SnapshotUploader.exe* to negotiate TLS 1.1 and 1.2.
-- Reported more telemetry when `SnapshotUploader` logs a warning or an error.
-- Stop taking snapshots when the back-end service reports the daily quota was reached (50 snapshots per day).
-- Added extra check in *SnapshotUploader.exe* to not allow two instances to run in the same time.
-
-### [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0)
-#### Changes
-- For applications that target .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or later.
-It used to be 2.2.0 or later.
-We believe this change won't be an issue for most applications. Let us know if this change prevents you from using the latest Snapshot Collector.
-- Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads.
-- Use `ServerTelemetryChannel` (if available) for more reliable reporting of telemetry.
-- Use `SdkInternalOperationsMonitor` on the initial connection to the Snapshot Debugger service so that dependency tracking ignores it.
-- Improved telemetry around initial connection to Snapshot Debugger.
-- Report more telemetry for the:
- - App Service version.
- - Azure compute instances.
- - Containers.
- - Azure Functions app.
-
-#### Bug fixes
-- When the problem counter reset interval is set to 24 days, interpret that as 24 hours.
-- Fixed a bug where the Snapshot Uploader would stop processing new snapshots if there was an exception while disposing a snapshot.
-
-### [1.2.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.3)
-Fixed strong-name signing with Snapshot Uploader binaries.
-
-### [1.2.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.2)
-#### Changes
-- The files needed for *SnapshotUploader(64).exe* are now embedded as resources in the main DLL. That means the `SnapshotCollectorFiles` folder is no longer created, which simplifies build and deployment and reduces clutter in Solution Explorer. Take care when you upgrade to review the changes in your `.csproj` file. The `Microsoft.ApplicationInsights.SnapshotCollector.targets` file is no longer needed.
-- Telemetry is logged to your Application Insights resource even if `ProvideAnonymousTelemetry` is set to false. This change is so that we can implement a health check feature in the Azure portal. `ProvideAnonymousTelemetry` affects only the telemetry sent to Microsoft for product support and improvement.
-- When `TempFolder` or `ShadowCopyFolder` are redirected to environment variables, keep the collector idle until those environment variables are set.
-- For applications that connect to the internet via a proxy server, Snapshot Collector now autodetects any proxy settings and passes them on to *SnapshotUploader.exe*.
-- Lower the priority of the `SnapshotUploader` process (where possible). This priority can be overridden via the `IsLowPrioirtySnapshotUploader` option.
-- Added a `GetSnapshotCollector` extension method on `TelemetryConfiguration` for scenarios where you want to configure the Snapshot Collector programmatically.
-- Set the Application Insights SDK version (instead of the application version) in customer-facing telemetry.
-- Send the first heartbeat event after two minutes.
-
-#### Bug fixes
-- Fixed `NullReferenceException` when exceptions have null or immutable Data dictionaries.
-- In the uploader, retry PDB matching a few times if we get a sharing violation.
-- Fix duplicate telemetry when more than one thread calls into the telemetry pipeline at startup.
-
-### [1.2.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.1)
-#### Changes
-- XML Doc comment files are now included in the NuGet package.
-- Added an `ExcludeFromSnapshotting` extension method on `System.Exception` for scenarios where you know you have a noisy exception and want to avoid creating snapshots for it.
-- Added an `IsEnabledWhenProfiling` configuration property that defaults to true. This is a change from previous versions where snapshot creation was temporarily disabled if the Application Insights Profiler was performing a detailed collection. The old behavior can be recovered by setting this property to `false`.
-
-#### Bug fixes
-- Sign *SnapshotUploader64.exe* properly.
-- Protect against double-initialization of the telemetry processor.
-- Prevent double logging of telemetry in apps with multiple pipelines.
-- Fixed a bug with the expiration time of a collection plan, which could prevent snapshots after 24 hours.
-
-### [1.2.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.0)
-The biggest change in this version (hence the move to a new minor version number) is a rewrite of the snapshot creation and handling pipeline. In previous versions, this functionality was implemented in native code (*ProductionBreakpoints*.dll* and *SnapshotHolder*.exe*). The new implementation is all managed code with P/Invokes.
-
-For this first version using the new pipeline, we haven't strayed far from the original behavior. The new implementation allows for better error reporting and sets us up for future improvements.
-
-#### Other changes in this version
-- *MinidumpUploader.exe* has been renamed to *SnapshotUploader.exe* (or *SnapshotUploader64.exe*).
-- Added timing telemetry to DeOptimize/ReOptimize requests.
-- Added gzip compression for minidump uploads.
-- Fixed a problem where PDBs were locked preventing site upgrade.
-- Log the original folder name (*SnapshotCollectorFiles*) when shadow-copying.
-- Adjusted memory limits for 64-bit processes to prevent site restarts due to OOM.
-- Fixed an issue where snapshots were still collected even after disabling.
-- Log heartbeat events to customer's AI resource.
-- Improved snapshot speed by removing "Source" from the problem ID.
-
-### [1.1.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.2)
-
-#### Changes
-- Augmented usage telemetry.
-- Detect and report .NET version and OS.
-- Detect and report more Azure environments (Azure Cloud Services, Azure Service Fabric).
-- Record and report exception metrics (number of first-chance exceptions and the number of `TrackException` calls) in Heartbeat telemetry.
-
-#### Bug fixes
-- Correct handling of `SqlException` where the inner exception (Win32Exception) isn't thrown.
-- Trimmed trailing spaces on symbol folders, which caused an incorrect parse of command-line arguments to the `MinidumpUploader`.
-- Prevented infinite retry of failed connections to the Snapshot Debugger agent's endpoint.
-
-### [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)
-#### Changes
-- Added host memory protection. This feature reduces the impact on the host machine's memory.
-- Improved the Azure portal snapshot viewing experience.
-
-
+- [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation"
description: "What's new in Azure Monitor documentation" Previously updated : 04/04/2024 Last updated : 08/21/2024
This article lists significant changes to Azure Monitor documentation.
|Application-Insights|[Live metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|We've updated our Live Metrics documentation so that it links out to both OpenTelemetry and the Classic API code.|
|Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|For Java OpenTelemetry, we've documented how to locally disable ingestion sampling. (preview feature)|
|Containers|[Enable private link with Container insights](containers/container-insights-private-link.md)|Added guidance for CLI.|
-|Containers|[Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](containers/prometheus-metrics-scrape-configuration.md)|Updated and refrehed|
+|Containers|[Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](containers/prometheus-metrics-scrape-configuration.md)|Updated and refreshed|
|Containers|[Use Prometheus exporters for common workloads with Azure Managed Prometheus](containers/prometheus-exporters.md)|New article listing supported exporters.|
|Essentials|[Send Prometheus metrics from virtual machines, scale sets, or Kubernetes clusters to an Azure Monitor workspace](essentials/prometheus-remote-write-virtual-machines.md)|Configure remote write for self-managed Prometheus on a Kubernetes cluster|
|General|[Create a metric alert with dynamic thresholds](alerts/alerts-dynamic-thresholds.md)|Added possible values for alert User Response field.|
Logs|[Manage tables in a Log Analytics workspace]()|Refreshed all Log Analytics
Security-Fundamentals|[Monitoring Azure App Service](../../articles/app-service/monitor-app-service.md)|Revised the Azure Monitor overview to improve usability. The article is cleaned up, streamlined, and better reflects the product architecture and the customer experience.|
Snapshot-Debugger|[host.json reference for Azure Functions 2.x and later](../../articles/azure-functions/functions-host-json.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Snapshot-Debugger|[Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger](profiler/profiler-bring-your-own-storage.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
-Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](./snapshot-debugger/snapshot-debugger.md#release-notes-for-microsoftapplicationinsightssnapshotcollector)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
+Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/blob/main/CHANGELOG.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Snapshot-Debugger|[Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](snapshot-debugger/snapshot-debugger-function-app.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Snapshot-Debugger|[Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Previously updated : 08/13/2024 Last updated : 08/20/2024 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
If the volume is created in an auto QoS capacity pool, the value displayed in this field is (quota x service level throughput).
* **Enable Cool Access**, **Coolness Period**, and **Cool Access Retrieval Policy**
- These fields configure [standard storage with cool access in Azure NetApp Files](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md).
+ These fields configure [Azure NetApp Files storage with cool access](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).
* **Virtual network** Specify the Azure virtual network (VNet) from which you want to access the volume.
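For scripted deployments, these portal fields correspond to cmdlet parameters in the Az.NetAppFiles PowerShell module. The following is an illustrative sketch only: the cool access parameter names (`-CoolAccess`, `-CoolnessPeriod`) and all resource names are assumptions that can vary by module version:

```azurepowershell-interactive
# Hypothetical SMB volume with cool access enabled (100 GiB quota).
# Replace every resource name and the subnet ID with your own values.
New-AzNetAppFilesVolume -ResourceGroupName "myRG" `
    -AccountName "myaccount" `
    -PoolName "mypool" `
    -Name "myvol01" `
    -Location "eastus" `
    -CreationToken "myvol01" `
    -UsageThreshold 107374182400 `
    -ServiceLevel "Standard" `
    -ProtocolType "CIFS" `
    -SubnetId "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/default" `
    -CoolAccess `
    -CoolnessPeriod 31
```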
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
Previously updated : 08/13/2024 Last updated : 08/20/2024 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
If the volume is created in an auto QoS capacity pool, the value displayed in this field is (quota x service level throughput).
* **Enable Cool Access**, **Coolness Period**, and **Cool Access Retrieval Policy**
- These fields configure [standard storage with cool access in Azure NetApp Files](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md).
+ These fields configure [Azure NetApp Files storage with cool access](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).
* **Virtual network** Specify the Microsoft Azure Virtual Network from which you want to access the volume.
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
Previously updated : 01/11/2024 Last updated : 08/20/2024
Azure NetApp Files is designed to provide high-performance file storage for ente
| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more demanding workloads on smaller Azure VMs. | Improve application performance at a smaller VM footprint, improving overall efficiency and lowering application license cost. |
| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides. | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience. |
| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions. | Save money by eliminating the need for unnecessary compute nodes when you expand storage, resulting in significant cost savings. |
-| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure Storage account (the cool tier). | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower-cost storage (the cool tier). |
+| Cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure Storage account (the cool tier). | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower-cost storage (the cool tier). |
These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency, cost, and scale.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
In the topology illustrated above, the on-premises network is connected to a hub
* [Configure network features for an Azure NetApp Files volume](configure-network-features.md)
* [Virtual network peering](../virtual-network/virtual-network-peering-overview.md)
* [Configure Virtual WAN for Azure NetApp Files](configure-virtual-wan.md)
-* [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md)
-* [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md)
+* [Azure NetApp Files storage with cool access](cool-access-introduction.md)
+* [Manage Azure NetApp Files storage with cool access](manage-cool-access.md)
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
Previously updated : 08/02/2022 Last updated : 08/20/2024 # Service levels for Azure NetApp Files
Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Stand
* <a name="Standard"></a>Standard storage: The Standard service level provides up to 16 MiB/s of throughput per 1 TiB of capacity provisioned.
- * Standard storage with cool access:
- The throughput experience for this service level is the same as the Standard service level for data that is in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md#effects-of-cool-access-on-data).
-
* <a name="Premium"></a>Premium storage: The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned.
* <a name="Ultra"></a>Ultra storage: The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned.
+* Storage with cool access:
+ Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same as for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md#effects-of-cool-access-on-data).
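Because these limits scale linearly with provisioned capacity, a quick back-of-the-envelope check helps when sizing a quota. A minimal sketch (the per-TiB figures come from the list above; the 4-TiB quota is just an example):

```powershell
# Expected per-volume throughput ceiling for a 4-TiB provisioned quota.
$quotaTiB = 4
"Standard: $($quotaTiB * 16) MiB/s"   # 16 MiB/s per TiB  -> 64 MiB/s
"Premium:  $($quotaTiB * 64) MiB/s"   # 64 MiB/s per TiB  -> 256 MiB/s
"Ultra:    $($quotaTiB * 128) MiB/s"  # 128 MiB/s per TiB -> 512 MiB/s
```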
+
## Throughput limits
The throughput limit for a volume is determined by the combination of the following factors:
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Previously updated : 08/08/2024 Last updated : 08/20/2024 # Create a capacity pool for Azure NetApp Files
Creating a capacity pool enables you to create volumes within it.
>[!NOTE]
>[!INCLUDE [Limitations for capacity pool minimum of 1 TiB](includes/2-tib-capacity-pool.md)]
- * **Enable cool access** *(for Standard service level only)*
- This option specifies whether volumes in the capacity pool support cool access. This option is currently supported for the Standard service level only. For details about using this option, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md).
+ * **Enable cool access**
+ This option specifies whether volumes in the capacity pool support cool access. For details about using this option, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).
* **QoS** Specify whether the capacity pool should use the **Manual** or **Auto** QoS type. See [Storage Hierarchy](azure-netapp-files-understand-storage-hierarchy.md) and [Performance Considerations](azure-netapp-files-performance-considerations.md) to understand the QoS types.
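If you create pools from PowerShell rather than the portal, the checkbox maps to a cmdlet switch. An illustrative sketch only: the `-CoolAccess` switch and the resource names are assumptions that can vary by Az.NetAppFiles module version:

```azurepowershell-interactive
# Hypothetical 4-TiB Standard pool with cool access enabled.
# PoolSize is specified in bytes; 4398046511104 bytes = 4 TiB.
New-AzNetAppFilesPool -ResourceGroupName "myRG" `
    -AccountName "myaccount" `
    -Name "mypool" `
    -Location "eastus" `
    -ServiceLevel "Standard" `
    -PoolSize 4398046511104 `
    -CoolAccess
```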
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
Previously updated : 07/18/2024 Last updated : 08/20/2024 # Storage hierarchy of Azure NetApp Files
Understanding how capacity pools work helps you select the right capacity pool t
- You can't move a capacity pool across NetApp accounts. For example, in the [Conceptual diagram of storage hierarchy](#conceptual_diagram_of_storage_hierarchy), you can't move Capacity Pool 1 US East NetApp account to US West 2 NetApp account.
- You can't delete a capacity pool until you delete all volumes within the capacity pool.
-- You can configure a Standard service-level capacity pool with the cool access option. For more information about cool access, see [Standard storage with cool access](cool-access-introduction.md).
+- You can configure a Standard, Premium, or Ultra service-level capacity pool with the cool access option. For more information about cool access, see [Azure NetApp Files storage with cool access](cool-access-introduction.md).
### <a name="qos_types"></a>Quality of Service (QoS) types for capacity pools
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Title: Standard storage with cool access in Azure NetApp Files
-description: Explains how to use standard storage with cool access to configure inactive data to move from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier).
+ Title: Azure NetApp Files storage with cool access
+description: Explains how to use Azure NetApp Files storage with cool access to configure inactive data to move from Azure NetApp Files service-level storage (the hot tier) to an Azure storage account (the cool tier).
Previously updated : 06/06/2024 Last updated : 08/20/2024
-# Standard storage with cool access in Azure NetApp Files
+# Azure NetApp Files storage with cool access
-Using Azure NetApp Files standard storage with cool access, you can configure inactive data to move from Azure NetApp Files Standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). Enabling cool access moves inactive data blocks from the volume and the volume's snapshots to the cool tier, resulting in cost savings.
+Using Azure NetApp Files storage with cool access, you can configure inactive data to move from Azure NetApp Files storage (the *hot tier*) to an Azure storage account (the *cool tier*). Enabling cool access moves inactive data blocks from the volume and the volume's snapshots to the cool tier, resulting in cost savings.
Most cold data is associated with unstructured data. It can account for more than 50% of the total storage capacity in many storage environments. Infrequently accessed data associated with productivity software, completed projects, and old datasets is an inefficient use of high-performance storage.
-Azure NetApp Files supports three [service levels](azure-netapp-files-service-levels.md) that can be configured at capacity pool level (Standard, Premium and Ultra). Cool access is an additional service only on the Standard service level.
+Azure NetApp Files supports cool access with three [service levels](azure-netapp-files-service-levels.md) (Standard, Premium, and Ultra).
The following diagram illustrates an application with a volume enabled for cool access.
In the initial write, data blocks are assigned a "warm" temperature value (in th
By `Default` (unless cool access retrieval policy is configured otherwise), data blocks on the cool tier that are read randomly again become "warm" and are moved back to the hot tier. Once marked as _warm_, the data blocks are again subjected to the temperature scan. However, large sequential reads (such as index and antivirus scans) on inactive data in the cool tier don't "warm" the data nor do they trigger inactive data to be moved back to the hot tier.
->[!IMPORTANT]
->If you're using a third-party backup service, configure it to use NDMP instead of the CIFS (Common Internet File System) or NFS protocols. NDMP reads do not affect the temperature of the data.
-
Metadata is never cooled and always remains in the hot tier. As such, the activities of metadata-intensive workloads (for example, high file-count environments like chip design, VCS, and home directories) aren't affected by tiering.

## Supported regions
-Standard storage with cool access is supported for the following regions:
+Azure NetApp Files storage with cool access is supported for the following regions:
* Australia Central
* Australia Central 2
Cool access offers [performance metrics](azure-netapp-files-metrics.md#cool-acce
## Billing
-You can enable tiering at the volume level for a newly created capacity pool that uses the Standard service level. How you're billed is based on the following factors:
+You can enable tiering at the volume level for a newly created capacity pool. How you're billed is based on the following factors:
-* The capacity in the Standard service level
+* The capacity and the service level
* Unallocated capacity within the capacity pool
-* The capacity in the cool tier (by enabling tiering for volumes in a Standard capacity pool)
+* The capacity in the cool tier
* Network transfer between the hot tier and the cool tier at the rate that is determined by the markup on top of the transaction cost (`GET` and `PUT` requests) on blob storage and private link transfer in either direction between the hot and cool tiers.
-Billing calculation for a Standard capacity pool is at the hot-tier rate for the data that isn't tiered to the cool tier; this includes unallocated capacity within the capacity pool. When you enable tiering for volumes, the capacity in the cool tier will be at the rate of the cool tier, and the remaining capacity will be at the rate of the hot tier. The rate of the cool tier is lower than the hot tier's rate.
+Billing calculation for a capacity pool is at the hot-tier rate for the data that isn't tiered to the cool tier; this includes unallocated capacity within the capacity pool. When you enable tiering for volumes, the capacity in the cool tier will be at the rate of the cool tier, and the remaining capacity will be at the rate of the hot tier. The rate of the cool tier is lower than the hot tier's rate.
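As a rough sketch of how that blended rate works, with entirely made-up rates (actual pricing varies by region and service level, and network transfer charges are billed separately as described above):

```powershell
# Hypothetical 10-TiB pool: 4 TiB stays at the hot-tier rate (including
# unallocated capacity), 6 TiB is tiered cool. Rates are illustrative only.
$hotRatePerTiB  = 150   # assumed hot-tier rate in $/TiB/month
$coolRatePerTiB = 30    # assumed cool-tier rate in $/TiB/month
$monthly = (4 * $hotRatePerTiB) + (6 * $coolRatePerTiB)
"Blended monthly capacity cost: `$$monthly"   # 4*150 + 6*30 = 780
```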
### Examples of billing structure
Your first twelve-month savings:
> [!TIP]
-> You can use the [Azure NetApp Files standard storage with cool access cost savings estimator](https://aka.ms/anfcoolaccesscalc) to interactively estimate cost savings based on changeable input parameters.
+> You can use the [Azure NetApp Files storage with cool access cost savings estimator](https://aka.ms/anfcoolaccesscalc) to interactively estimate cost savings based on changeable input parameters.
## Next steps
-* [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md)
+* [Manage Azure NetApp Files storage with cool access](manage-cool-access.md)
* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
Previously updated : 08/13/2024 Last updated : 08/20/2024 # Create a dual-protocol volume for Azure NetApp Files
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
If the volume is created in an auto QoS capacity pool, the value displayed in this field is (quota x service level throughput).
* **Enable Cool Access**, **Coolness Period**, and **Cool Access Retrieval Policy**
- These fields configure [standard storage with cool access in Azure NetApp Files](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md).
+ These fields configure [Azure NetApp Files storage with cool access](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).
* **Virtual network** Specify the Azure virtual network (VNet) from which you want to access the volume.
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
Previously updated : 08/14/2024 Last updated : 08/20/2024
This article describes requirements and considerations about [using the volume c
* You can revert a source or destination volume of a cross-region replication to a snapshot, provided the snapshot is newer than the most recent SnapMirror snapshot. Snapshots older than the SnapMirror snapshot can't be used for a volume revert operation. For more information, see [Revert a volume using snapshot revert](snapshots-revert-volume.md).
* Data replication volumes support [customer-managed keys](configure-customer-managed-keys.md).
* If you are copying large data sets into a volume that has cross-region replication enabled and you have spare capacity in the capacity pool, you should set the replication interval to 10 minutes, increase the volume size to allow for the changes to be stored, and temporarily disable replication.
-* If you use the cool access feature, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md#considerations) for more considerations.
+* If you use the cool access feature, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations) for more considerations.
* [Large volumes](large-volumes-requirements-considerations.md) are supported with cross-region replication only with an hourly or daily replication schedule.

## Large volumes configuration
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
Previously updated : 05/11/2023 Last updated : 08/20/2024 # Dynamically change the service level of a volume
-You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not affect access to the volume.
+You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data nor does it affect access to the volume.
-This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
+This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
-The capacity pool that you want to move the volume to must already exist. The capacity pool can contain other volumes. If you want to move the volume to a brand-new capacity pool, you need to [create the capacity pool](azure-netapp-files-set-up-capacity-pool.md) before you move the volume.
+The capacity pool that you want to move the volume to must already exist. The capacity pool can contain other volumes. If you want to move the volume to a brand-new capacity pool, you need to [create the capacity pool](azure-netapp-files-set-up-capacity-pool.md) before you move the volume.
## Considerations

* This functionality is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp account.
-* After the volume is moved to another capacity pool, you'll no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
+* After the volume is moved to another capacity pool, you no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to higher service level without wait time.
The capacity pool that you want to move the volume to must already exist. The ca
* Regardless of the source pool's QoS type, when the target pool is of the *auto* QoS type, the volume's throughput is changed with the move to match the service level of the target capacity pool.
-* If you use standard storage with cool access, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md#considerations) for more considerations.
+* If you use cool access, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations) for more considerations.
## Move a volume to another capacity pool
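For automation, the move is a single operation against the volume. An illustrative sketch only: the `Set-AzNetAppFilesVolumePool` cmdlet and its `-NewPoolResourceId` parameter are assumptions based on the Az.NetAppFiles module and can vary by version:

```azurepowershell-interactive
# Hypothetical move of myvol01 from standardpool to premiumpool within the
# same NetApp account (an in-place service-level change).
$newPoolId = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/myaccount/capacityPools/premiumpool"

Set-AzNetAppFilesVolumePool -ResourceGroupName "myRG" `
    -AccountName "myaccount" `
    -PoolName "standardpool" `
    -Name "myvol01" `
    -NewPoolResourceId $newPoolId
```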
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
The following requirements and considerations apply to large volumes. For perfor
For the latest performance benchmark numbers conducted on Azure NetApp Files Large volumes, see [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md) and [Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)](solutions-benefits-azure-netapp-files-electronic-design-automation.md).
-* Large volumes aren't currently supported with standard storage with cool access.
+* Large volumes aren't currently supported with cool access.
## About 64-bit file IDs
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Title: Manage Azure NetApp Files standard storage with cool access
+ Title: Manage Azure NetApp Files storage with cool access
description: Learn how to free up storage by configuring inactive data to move from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier). Previously updated : 01/16/2023 Last updated : 08/20/2024
-# Manage Azure NetApp Files standard storage with cool access
+# Manage Azure NetApp Files storage with cool access
-Using Azure NetApp Files [standard storage with cool access](cool-access-introduction.md), you can configure inactive data to move from Azure NetApp Files Standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). In doing so, you reduce the total cost of ownership of your data stored in Azure NetApp Files.
+Using Azure NetApp Files [storage with cool access](cool-access-introduction.md), you can configure inactive data to move from Azure NetApp Files storage (the *hot tier*) to an Azure storage account (the *cool tier*). In doing so, you reduce the total cost of ownership of your data stored in Azure NetApp Files.
-The standard storage with cool access feature allows you to configure a Standard capacity pool with cool access. The Standard storage service level with cool access feature moves cold (infrequently accessed) data from the volume and the volume's snapshots to the Azure storage account to help you reduce the cost of storage. Throughput requirements remain the same for the Standard service level enabled with cool access. However, there can be a difference in data access latency because the data needs to be read from the Azure storage account.
+The cool access feature allows you to configure a capacity pool with cool access. The feature moves cold (infrequently accessed) data from the volume and the volume's snapshots to the Azure storage account to help you reduce the cost of storage. Throughput requirements remain the same for the service level (Standard, Premium, Ultra) enabled with cool access. However, there can be a difference in data access latency because the data needs to be read from the Azure storage account.
-The standard storage with cool access feature provides options for the "coolness period" to optimize the network transfer cost, based on your workload and read/write patterns. This feature is provided at the volume level. See the [Set options for coolness period section](#modify_cool) for details. The standard storage with cool access feature also provides metrics on a per-volume basis. See the [Metrics section](cool-access-introduction.md#metrics) for details.
+The storage with cool access feature provides options for the "coolness period" to optimize the network transfer cost, based on your workload and read/write patterns. This feature is provided at the volume level. See the [Set options for coolness period section](#modify_cool) for details. The storage with cool access feature also provides metrics on a per-volume basis. See the [Metrics section](cool-access-introduction.md#metrics) for details.
## Considerations

* No guarantee is provided for any maximum latency for client workload for any of the service tiers.
-* This feature is available only at the **Standard** service level. It's not supported for the Ultra or Premium service level.
-* Although cool access is available for the Standard service level, how you're billed for using the feature differs from the Standard service level charges. See the [Billing section](cool-access-introduction.md#billing) for details and examples.
-* You can convert an existing Standard service-level capacity pool into a cool-access capacity pool to create cool access volumes. However, once the capacity pool is enabled for cool access, you can't convert it back to a non-cool-access capacity pool.
+* Although cool access is available for the Standard, Premium, and Ultra service levels, how you're billed for using the feature differs from the hot tier service level charges. See the [Billing section](cool-access-introduction.md#billing) for details and examples.
+* You can convert an existing capacity pool into a cool-access capacity pool to create cool access volumes. However, once the capacity pool is enabled for cool access, you can't convert it back to a non-cool-access capacity pool.
* A cool-access capacity pool can contain both volumes with cool access enabled and volumes with cool access disabled.
* To prevent data retrieval from the cool tier to the hot tier during sequential read operations (for example, antivirus or other file scanning operations), set the cool access retrieval policy to "Default" or "Never." For more information, see [Enable cool access on a new volume](#enable-cool-access-on-a-new-volume).
* After the capacity pool is configured with the option to support cool access volumes, the setting can't be disabled at the _capacity pool_ level. However, you can turn on or turn off the cool access setting at the volume level anytime. Turning off the cool access setting at the _volume_ level stops further tiering of data.
-* You can't use [large volume](large-volumes-requirements-considerations.md) with Standard storage with cool access.
+* You can't use [large volume](large-volumes-requirements-considerations.md) with cool access.
* See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits) for maximum number of volumes supported for cool access per subscription per region.
* Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) and [cross-zone replication](cross-zone-replication-introduction.md):
    * The cool access setting on the destination is updated automatically to match the source volume whenever the setting is changed on the source volume or during authorizing or performing a reverse resync of the replication. Changes to the cool access setting on the destination volume don't affect the setting on the source volume.
The standard storage with cool access feature provides options for the ΓÇ£coolne
* If you move a cool access volume to another capacity pool (service level change), that pool must also be enabled for cool access.
* If you disable cool access and turn off tiering on a cool access volume (that is, the volume no longer uses cool access), you can't move it to a non-cool-access capacity pool. In a cool access capacity pool, all volumes, *whether enabled for cool access or not*, can only be moved to another cool access capacity pool.
-## Register the feature
+## Enable cool access
+
+You must register for cool access before you can enable it at the capacity pool and volume levels.
+
+### Register the feature
+
+Azure NetApp Files storage with cool access is generally available. Before using cool access for the first time, you must register for the feature with the service level you intend to use it for.
-This feature is currently in preview. You need to register the feature before using it for the first time. After registration, the feature is enabled and works in the background. No UI control is required.
+# [Standard](#tab/standard)
+
+After registration, the feature is enabled and works in the background. No UI control is required.
1. Register the feature:
This feature is currently in preview. You need to register the feature before us
```
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
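The registration step truncated above follows the standard provider-feature pattern. A hypothetical sketch, assuming a Standard-tier feature name of `ANFCoolAccess` (patterned on the Premium and Ultra feature names shown in the other tabs; the actual name may differ):

```azurepowershell-interactive
# Register the assumed cool access feature flag, then check its status.
Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccess
Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccess
```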
-## Enable cool access
+# [Premium](#tab/premium)
+
+You must submit a waitlist request to access this feature using the [request form](https://aka.ms/ANFcoolaccesssignup). The feature can take approximately one week to be enabled after you submit the waitlist request. Check the status of feature registration by using the command:
+
+```azurepowershell-interactive
+Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccessPremium
+```
-To use the Standard storage with cool access feature, you need to configure the feature at the capacity pool level and the volume level.
+# [Ultra](#tab/ultra)
+
+You must submit a waitlist request to access this feature using the [request form](https://aka.ms/ANFcoolaccesssignup). The feature can take approximately one week to be enabled after you submit the waitlist request. Check the status of feature registration by using the command:
+
+```azurepowershell-interactive
+Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccessUltra
+```
+
+
### Configure the capacity pool for cool access
-Before creating or enabling a cool-access volume, you need to configure a Standard service-level capacity pool with cool access. You can do so in one of the following ways:
+Before creating or enabling a cool-access volume, you need to configure a capacity pool with cool access. You can do so in one of the following ways:
-* [Create a new Standard service-level capacity pool with cool access.](#enable-cool-access-new-pool)
-* [Modify an existing Standard service-level capacity pool to support cool-access volumes.](#enable-cool-access-existing-pool)
+* [Create a new capacity pool with cool access.](#enable-cool-access-new-pool)
+* [Modify an existing capacity pool to support cool-access volumes.](#enable-cool-access-existing-pool)
#### <a name="enable-cool-access-new-pool"></a> Enable cool access on a new capacity pool  
-1. [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md) with the **Standard** service level.
-1. Check the **Enable Cool Access** checkbox, then select **Create**.
+1. [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
+1. Check the **Enable Cool Access** checkbox then select **Create**.
#### <a name="enable-cool-access-existing-pool"></a> Enable cool access on an existing capacity pool
-You can enable cool access support on an existing Standard service-level capacity pool. This action allows you to add or modify volumes in the pool to use cool access.
+You can enable cool access support on an existing capacity pool. This action allows you to add or modify volumes in the pool to use cool access.
-1. Right-click a **Standard** service-level capacity pool for which you want to enable cool access.
+1. Right-click the capacity pool for which you want to enable cool access.
2. Select **Enable Cool Access**:
You can enable cool access support on an existing Standard service-level capacit
### Configure a volume for cool access
-Standard storage with cool access can be enabled during the creation of a volume and on existing volumes that are part of a capacity pool that has cool access enabled.
+Azure NetApp Files storage with cool access can be enabled during the creation of a volume and on existing volumes that are part of a capacity pool that has cool access enabled.
#### Enable cool access on a new volume
Standard storage with cool access can be enabled during the creation of a volume
* **Cool Access Retrieval Policy**
- This option specifies under which conditions data is moved back to the hot tier. You can set this option to `Default`, `On-Read`, or `Never`.
+ This option specifies under which conditions data moves back to the hot tier. You can set this option to `Default`, `On-Read`, or `Never`.
The following list describes the data retrieval behavior with the cool access retrieval policy settings:
* *Cool access is **enabled***:
    * If no value is set for cool access retrieval policy:
- The retrieval policy is set to `Default`, and cold data is retrieved to the hot tier only when performing random reads. Sequential reads are served directly from the cool tier.
+ The retrieval policy will be set to `Default`, and cold data will be retrieved to the hot tier only when performing random reads. Sequential reads will be served directly from the cool tier.
* If cool access retrieval policy is set to `Default`:
- Cold data is retrieved only by performing random reads.
+ Cold data will be retrieved only by performing random reads.
* If cool access retrieval policy is set to `On-Read`:
- Cold data is retrieved by performing both sequential and random reads.
+ Cold data will be retrieved by performing both sequential and random reads.
* If cool access retrieval policy is set to `Never`:
- Cold data is served directly from the cool tier and not retrieved to the hot tier.
+ Cold data will be served directly from the cool tier and won't be retrieved to the hot tier.
* *Cool access is **disabled**:*
    * If cool access is disabled, you can set a cool access retrieval policy only if there's existing data on the cool tier.
    * Once you disable the cool access setting on the volume, the cool access retrieval policy remains the same.
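As a rough Azure CLI counterpart to these volume options, here's a hedged sketch of creating a volume with cool access settings; the three cool access flags are assumptions to verify against your CLI version, and the resource names are placeholders:

```azurecli
# Sketch: create a volume with cool access, a 31-day coolness period,
# and the Default retrieval policy. The cool access flag names are assumptions.
az netappfiles volume create \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCoolPool \
  --name myVolume \
  --file-path myvolume \
  --usage-threshold 100 \
  --vnet myVnet \
  --subnet mySubnet \
  --cool-access true \
  --coolness-period 31 \
  --cool-access-retrieval-policy Default
```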
Standard storage with cool access can be enabled during the creation of a volume
#### Enable cool access on an existing volume
-In a Standard service-level, cool-access enabled capacity pool, you can enable an existing volume to support cool access.
+In a cool-access enabled capacity pool, you can enable an existing volume to support cool access.
1. Right-click the volume for which you want to enable cool access.
1. In the **Edit** window that appears, set the following options for the volume:
    * **Enable Cool Access**
- This option specifies whether the volume supports cool access.
+ This option specifies whether the volume will support cool access.
* **Coolness Period**
    This option specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account. The default value is 31 days. The supported values are between 2 and 183 days.
* **Cool Access Retrieval Policy**
- This option specifies under which conditions data is moved back to the hot tier. You can set this option to `Default`, `On-Read`, or `Never`.
+ This option specifies under which conditions data moves back to the hot tier. You can set this option to `Default`, `On-Read`, or `Never`.
The following list describes the data retrieval behavior with the cool access retrieval policy settings:
* *Cool access is **enabled***:
    * If no value is set for cool access retrieval policy:
- The retrieval policy is set to `Default`, and cold data is retrieved to the hot tier only when performing random reads. Sequential reads are served directly from the cool tier.
+ The retrieval policy will be set to `Default`, and cold data will be retrieved to the hot tier only when performing random reads. Sequential reads will be served directly from the cool tier.
* If cool access retrieval policy is set to `Default`:
- Cold data is retrieved only by performing random reads.
+ Cold data will be retrieved only by performing random reads.
* If cool access retrieval policy is set to `On-Read`:
- Cold data is retrieved by performing both sequential and random reads.
+ Cold data will be retrieved by performing both sequential and random reads.
* If cool access retrieval policy is set to `Never`:
- Cold data is served directly from the cool tier and not be retrieved to the hot tier.
+ Cold data will be served directly from the cool tier and won't be retrieved to the hot tier.
* *Cool access is **disabled**:*
    * If cool access is disabled, you can set a cool access retrieval policy only if there's existing data on the cool tier.
    * Once you disable the cool access setting on the volume, the cool access retrieval policy remains the same.
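A corresponding Azure CLI sketch for enabling cool access on an existing volume (the flag names are the same assumptions as in the creation sketch above):

```azurecli
# Sketch: enable cool access and tune retrieval behavior on an existing volume.
az netappfiles volume update \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCoolPool \
  --name myVolume \
  --cool-access true \
  --coolness-period 45 \
  --cool-access-retrieval-policy OnRead
```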
Based on the client read/write patterns, you can modify the cool access configur
1. In the **Edit** window that appears, update the **Coolness Period** and **Cool Access Retrieval Policy** fields as needed.

## Next steps
-* [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md)
+* [Azure NetApp Files storage with cool access](cool-access-introduction.md)
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
Previously updated : 03/22/2024 Last updated : 08/20/2024
* Currently, you can [restore a snapshot only to a new volume](snapshots-introduction.md#restoring-cloning-an-online-snapshot-to-a-new-volume).
-* If you use the standard storage with cool access feature, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md#considerations) for more considerations.
+* If you use the cool access feature, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations) for more considerations.
* Cross-region replication and cross-zone replication operations are suspended and cannot be added while restoring a snapshot to a new volume.
azure-netapp-files Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/tools-reference.md
Previously updated : 01/12/2023 Last updated : 08/20/2024
Azure NetApp Files offers [multiple tools](https://aka.ms/anftools) to estimate
This comprehensive tool estimates the infrastructure costs of an SAP HANA on Azure NetApp Files landscape. The estimate includes primary storage, backup, and replication costs.
-* [**Azure NetApp Files Standard storage with cool access cost savings estimator**](https://aka.ms/anfcoolaccesscalc)
+* [**Azure NetApp Files storage with cool access cost savings estimator**](https://aka.ms/anfcoolaccesscalc)
- Standard storage with cool access enables you to transparently move infrequently accessed data to less expensive storage. This cost savings estimator helps you understand how much money you can save by enabling Standard storage with cool access.
+ Azure NetApp Files storage with cool access enables you to transparently move infrequently accessed data to less expensive storage. This cost savings estimator helps you understand how much money you can save by enabling storage with cool access.
* [**Azure NetApp Files Region and Feature Map**](https://aka.ms/anfmap)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 08/07/2024 Last updated : 08/20/2024
Azure NetApp Files is updated regularly. This article provides a summary about t
## August 2024
+* [Azure NetApp Files storage with cool access](cool-access-introduction.md) is now generally available (GA) and supported with the Standard, Premium, and Ultra service levels. Cool access is also now supported for destination volumes in cross-region/cross-zone relationships.
+
+ With the announcement of cool access's general availability, you can now enable cool access for volumes in Premium and Ultra service level capacity pools, in addition to volumes in Standard service level capacity pools. With cool access, you can transparently store data in a more cost-effective manner on Azure storage accounts based on the data's access pattern.
+
+ The cool access feature lets you configure a capacity pool with cool access, which transparently moves cold (infrequently accessed) data to an Azure storage account to help you reduce the total cost of storage. Because data blocks might be tiered to the Azure storage account, there's a difference in data access latency. The cool access feature provides options for the "coolness period" to optimize when infrequently accessed data moves to the cool tier and to manage network transfer costs, based on your workload and read/write patterns. The "coolness period" setting is provided at the volume level.
+
+ In a cross-region or cross-zone replication setting, cool access can now be configured on destination volumes to support data protection. This capability provides cost savings without any latency impact on source volumes.
+
+ You must still [register the feature](manage-cool-access.md#register-the-feature) before you can enable cool access.
+ * [Volume encryption with customer-managed keys with managed Hardware Security Module (HSM)](configure-customer-managed-keys-hardware.md) (Preview) Volume encryption with customer-managed keys with managed HSM extends the [customer-managed keys](configure-customer-managed-keys.md), enabling you to store your keys in a more secure FIPS 140-2 Level 3 HSM service instead of the FIPS 140-2 Level 1 or 2 encryption offered with Azure Key Vault.
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> [!NOTE] > You can't add a tag to a virtual machine that has been marked as generalized. You mark a virtual machine as generalized with [Set-AzVm -Generalized](/powershell/module/Az.Compute/Set-AzVM) or [az vm generalize](/cli/azure/vm#az-vm-generalize).
+> Tags on virtual machine extensions can only be updated when the VM is running.
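For example, here's a hedged sketch of updating tags on a VM extension with the generic `az resource tag` command while the VM is running; the resource ID shown is hypothetical:

```azurecli
# Sketch: update tags on a VM extension (the --ids value is a hypothetical example).
# By default, az resource tag replaces existing tags; add --is-incremental to append.
az resource tag \
  --ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/extensions/CustomScriptExtension" \
  --tags costCenter=1234 env=prod
```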
## Microsoft.ConfidentialLedger
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Microsoft regularly applies important updates to the Azure VMware Solution for n
All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in Azure Commercial. [Learn more](architecture-private-clouds.md#vmware-software-versions)
-**Azure VMware Solution in Microsoft Azure Government**
-
-Azure VMware Solution has achieved Department of Defense (DoD) Impact Level 4 (IL4) authorization in Microsoft Azure Government.
+Azure VMware Solution was approved to be added as a service within the DoD SRG Impact Level 4 Provisional Authorization (PA) in [Microsoft Azure Government](https://azure.microsoft.com/explore/global-infrastructure/government/#why-azure).
## May 2024
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Title: Support Matrix for Azure file share backup by using Azure Backup description: Provides a summary of support settings and limitations when backing up Azure file shares. Previously updated : 03/29/2024 Last updated : 08/16/2024
Vaulted backup for Azure Files (preview) is available in West Central US, Southe
+## Daylight savings
+
+Azure Backup doesn't support automatic clock adjustment for daylight saving time for Azure VM backups. It doesn't shift the hour of the backup forward or backward. To ensure the backup runs at the desired time, modify the backup policies manually as required.
+
+## Support for customer-managed failover
+
+This section describes how your backups and restores are affected after customer-managed failovers.
+
+The following table lists the behavior of backups due to customer-initiated failovers:
+
+| Failover type | Backups | Restore | Enabling protection (re-protection) of failed over account in secondary region |
+| | | | |
+| Customer-managed planned failover | Supported | Supported | Not supported |
+| Customer-managed unplanned failover | Not supported | Only cross-region restore from the vault is supported. | Not supported |
## Next steps

* Learn how to [Back up Azure file shares](backup-afs.md)
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
The following table lists the main properties of `room` objects:
| `validFrom` | Earliest time a `room` can be used. |
| `validUntil` | Latest time a `room` can be used. |
| `pstnDialOutEnabled` | Enable or disable dialing out to a PSTN number in a room.|
-| `participants` | List of participants to a `room`. Specified as a `CommunicationIdentifier`. |
+| `participants` | List of participants in a `room`. Specified as a `CommunicationUserIdentifier`. |
| `roleType` | The role of a room participant. Can be either `Presenter`, `Attendee`, or `Consumer`. |

::: zone pivot="platform-azcli"
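In the Azure CLI pivot, a hedged sketch of creating a room with these properties follows; the parameter names come from the `communication` CLI extension and are assumptions to verify against its reference:

```azurecli
# Sketch: create a room that allows PSTN dial-out, with one presenter.
# Parameter names are assumptions; the participant ID is a placeholder.
az communication rooms create \
  --valid-from "2024-09-01T10:00:00Z" \
  --valid-until "2024-09-01T12:00:00Z" \
  --pstn-dial-out-enabled true \
  --presenter-participants "<communication-user-id>" \
  --connection-string "$AZURE_COMMUNICATION_CONNECTION_STRING"
```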
communication-services Manage Rooms Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/manage-rooms-call.md
call_connection_properties = client.connect_call(call_locator=room_call_locator,
Once successfully connected to a room call, a `CallConnect` event is sent to the callback URI. You can use `callConnectionId` to retrieve a call connection on the room call as needed. The following sample code snippets use the `callConnectionId` to demonstrate this function.
-### Add PSTN Participant
+### Add PSTN participant
Using Call Automation, you can dial out to a PSTN number and add the participant to a room call. However, you must set up the room with the PSTN dial-out option enabled (`EnabledPSTNDialout` set to `true`), and the Azure Communication Services resource must have a valid phone number provisioned. For more information, see [Rooms quickstart](../../quickstarts//rooms/get-started-rooms.md?tabs=windows&pivots=platform-azcli#enable-pstn-dial-out-capability-for-a-room).
result = call_connection_client.add_participant(
``` --
-### Remove PSTN Participant
+### Remove PSTN participant
### [csharp](#tab/csharp)
result = call_connection_client.send_dtmf_tones(
``` --
-### Call Recording
+### Call recording
Azure Communication Services rooms support recording capabilities including `start`, `stop`, `pause`, `resume`, and so on, provided by Call Automation. See the following code snippets to start/stop/pause/resume a recording in a room call. For a complete list of actions, see [Call Automation recording](../../concepts/voice-video-calling/call-recording.md#get-full-control-over-your-recordings-with-our-call-recording-apis). ### [csharp](#tab/csharp)
stop_recording = call_automation_client.stop_recording(recording_id = recording_
``` --
-### Terminate a Call
+### Terminate a call
You can use the Call Automation SDK Hang Up action to terminate a call. When the Hang Up action completes, the SDK publishes a `CallDisconnected` event. ### [csharp](#tab/csharp)
call_connection_client.hang_up(is_for_everyone=True)
``` --
-## Other Actions
+## Other actions
The following in-call actions are also supported in a room call. 1. Add participant (ACS identifier) 1. Remove participant (ACS identifier)
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Previously updated : 12/07/2023 Last updated : 01/01/2024
-# What's new in Azure Communication Services, Holiday Edition, 2023
+# What's new in Azure Communication Services
-We created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services. Be sure to check back monthly for all the newest and latest information!
+We created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services.
-We're combining the November and December updates into one. **Have a terrific holiday, everyone!**
-<br>
-<br>
-<br>
+## May 2024
+### Data Retention with Chat threads
-## New features
-Get detailed information on the latest Azure Communication Services feature launches.
+Developers can now create chat threads with a retention policy between 30 and 90 days. This feature is in public preview.
-### Call Diagnostics now available in Public Preview
+This policy is optional. Developers can choose to create a chat thread with infinite retention (as before) or set a retention policy between 30 and 90 days. If the thread needs to be kept for longer than 90 days, you can extend the time using the update chat thread properties API. The policy is geared for data management in organizations that need to move data into their archives for historical purposes or delete the data within a given period.
+
+Existing chat threads aren't affected by the policy.
+
+For more information, see:
+- [Chat concepts](./concepts/chat/concepts.md#chat-data)
+- [Create Chat Thread - REST API](/rest/api/communication/chat/chat/create-chat-thread#noneretentionpolicy)
+- [Update Chat Thread Properties - REST API](/rest/api/communication/chat/chat-thread/update-chat-thread-properties#noneretentionpolicy)
+
+### PowerPoint Live
+
+Now in general availability, PPT Live gives both the presenter and audience an inclusive and engaging experience. PPT Live combines the best parts of presenting in PowerPoint with the connection and collaboration of a Microsoft Teams meeting.
++
+Meeting participants can now view PowerPoint Live sessions initiated by a Teams client using the Azure Communication Services Web UI Library. Participants can follow along with a presentation and view presenter annotations. Developers can use this function via our composites including `CallComposite` and `CallWithChatComposite`, and through components such as `VideoGallery`.
+
+For more information, see [Introducing PowerPoint Live in Microsoft Teams](https://techcommunity.microsoft.com/t5/microsoft-365-blog/introducing-powerpoint-live-in-microsoft-teams/ba-p/2140980) and [Present from PowerPoint Live in Microsoft Teams](https://support.microsoft.com/en-us/office/present-from-powerpoint-live-in-microsoft-teams-28b20e74-7165-499c-9bd4-0ad975d448ad).
+
+### Live Reactions
+
+During live calls, participants can react with emojis: like, love, applause, laugh, and surprise.
++
+Now generally available, the updated UI library composites and components include call reactions. The UI Library supports the following list of live call reactions: &#128077; like reaction, &#129505; heart reaction, &#128079; applause reaction, &#128514; laughter reaction, &#128558; surprise reaction.
+
+Call reactions are associated with the participant who sends them and are visible to all types of participants (in-tenant, guest, federated, anonymous). Call reactions are supported in all types of calls such as Rooms, groups, and meetings (scheduled, private, channel) of all sizes (small, large, extra-large).
+
+Adding this feature encourages greater engagement within calls, as people can now react in real time without needing to speak or interrupt.
+
+- The ability to have live call reactions added to `CallComposite` and `CallwithChatComposite` on web.
+- Call reactions added at the component level.
+
+For more information, see [Reactions](./how-tos/calling-sdk/reactions.md).
+
+### Closed Captions
+
+Promote accessibility by displaying text of the audio in video calls. Already available for app-to-Teams calls, this general availability release adds support for closed captions in all app-to-app calls.
++
+For more information, see [Closed Captions overview](./concepts/voice-video-calling/closed-captions.md).
+
+You can also learn more about [Azure Communication Services interoperability with Teams](./concepts/teams-interop.md).
+
+### Copilot for Call Diagnostics
+
+AI can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot for Azure (public preview)](/azure/copilot/overview) can use Copilot within Call Diagnostics to understand and resolve many calling issues. For example, developers can ask Copilot questions, such as:
+
+- How do I run network diagnostics in Azure Communication Services VoIP calls?
+- How can I optimize my calls for poor network conditions?
+- How do I fix common causes of poor media streams in Azure Communication calls?
+- How can I fix the subcode 41048, which caused the video part of my call to fail?
++
+Developers can use Call Diagnostics to understand call quality and reliability across the organization to deliver a great customer calling experience. Many issues can affect the quality of your calls, such as poor internet connectivity, software compatibility issues, and technical difficulties with devices.
+
+Getting to the root cause of these issues can alleviate potentially frustrating situations for all call participants, whether they're a patient checking in for a doctor's call, or a student taking a lesson with their teacher. Call Diagnostics enables developers to drill down into the data to identify root problems and find a solution. You can use the built-in visualizations in the Azure portal or connect underlying usage and quality data to your own systems.
+
+For more information, see [Call Diagnostics](./concepts/voice-video-calling/call-diagnostics.md).
+
+## April 2024
+
+### Business-to-consumer extensibility with Microsoft Teams for Calling
+
+Now in general availability, developers can take advantage of calling interoperability for Microsoft Teams users in Azure Communication Services Calling workflows.
+
+Developers can use [Call Automation APIs](./concepts/call-automation/call-automation.md) to bring Teams users into business-to-consumer (B2C) calling workflows and interactions, helping you deliver advanced customer service solutions. This interoperability is offered over VoIP to reduce telephony infrastructure overhead. Developers can add Teams users to Azure Communication Services calls using the participant's Entra object ID (OID).
+
+#### Use Cases
+
+- **Teams as an extension of agent desktop**: Connect your CCaaS solution to Teams and enable your agents to handle customer calls on Teams. Having Teams as the single-pane-of-glass solution for both internal and B2C communication increases agent productivity and empowers them to deliver first-class service to customers.
+
+- **Expert Consultation**: Businesses can use Teams to invite subject matter experts into their customer service workflows for expedient issue resolution and improve first call resolution rate.
++
+Azure Communication Services B2C extensibility with Microsoft Teams makes it easy for customers to reach sales and support teams and for businesses to deliver effective customer experiences.
+
+For more information, see [Call Automation workflows interop with Microsoft Teams](./concepts/call-automation/call-automation-teams-interop.md).
+
+### Image Sharing in Microsoft Teams meetings
+
+Microsoft Teams users can now share images with Azure Communication Services users in the context of a Teams meeting. This feature is now generally available. Image sharing enhances real-time collaboration in meetings. Image overlay is also supported, so users can view shared images in detail.
+
+Image sharing is helpful in many scenarios, such as a business sharing photos showcasing their work or doctors sharing images with patients for after care instructions.
++
+Try out this feature using either our UI Library or the Chat SDK. The SDK is available in C# (.NET), JavaScript, Python, and Java:
+
+- [Enable inline image using UI Library in Teams Meetings](./tutorials/inline-image-tutorial-interop-chat.md)
+- [Sample: Image Sharing](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-jointeamsmeeting--join-teams-meeting#adding-image-sharing)
+
+### Deep Noise Suppression for Desktop
+
+Deep noise suppression is currently in public preview. Noise suppression improves VoIP and video calls by eliminating background noise, making it easier to talk and listen. For example, if you're taking an Azure Communication Services WebJS call in a coffee shop with considerable noise, turning on noise suppression can significantly improve the calling experience by eliminating the background noise from the shop.
+
+For more information, see [Add audio quality enhancements to your audio calling experience](./tutorials/audio-quality-enhancements/add-noise-supression.md).
+
+### Calling native SDKs for Android, iOS, and Windows
+
+We updated the Calling native SDKs to improve the customer experience. This release includes:
+
+- Custom background for video calls
+- Proxy configuration
+- Android TelecomManager
+- Unidirectional Data Channel
+- Time To Live lifespan for push notifications
+
+#### Custom background for video calls
+
+Custom background for video calls is generally available. This feature enables customers to remove distractions behind them and to upload their own personalized images for use as a background.
++
+For example, business owners can use the Calling SDK to show custom backgrounds in place of the actual background. You can, for example, upload an image of a modern and spacious office and set it as its background for video calls. Anyone who joins the call sees the customized background, which looks realistic and natural. You can also use custom branding images as background to show a fresh image to your customers.
+
+For more information, see [QuickStart: Add video effects to your video calls](./quickstarts/voice-video-calling/get-started-video-effects.md).
+
+#### Proxy configuration
-Azure Communication Services Call Diagnostics (CD) is a new feature that helps developers troubleshoot and improve their voice & video calling applications. It's an Azure Monitor experience that offers specialized telemetry and diagnostic pages in the Azure portal. With Call Diagnostics, developers can easily access and analyze data, visualizations, and insights for each call, and identify and resolve issues that affect the end-user experience. Call Diagnostics works with other ACS features, such as noise suppression and pre-call troubleshooting, to deliver beautiful, reliable video calling experiences that are easy to develop and operate. Call Diagnostics is now available in Public Preview. Try it today and see how Azure can help you make every call a success. 🚀
+Proxy configuration is now generally available. Some environments such as highly regulated industries or those dealing with confidential information require proxies to secure and control network traffic. You can use the Calling SDK to configure the HTTP and media proxies for your Azure Communication Services calls. This way, you can ensure that your communications are compliant with the network policies and regulations. You can use the native SDK methods to set the proxy configuration for your app.
+For more information, see [Tutorial: Proxy your calling traffic](./tutorials/proxy-calling-support-tutorial.md?pivots=platform-android).
+#### Android TelecomManager
-[Read the documentation.](./concepts/voice-video-calling/call-diagnostics.md)
+Android TelecomManager manages audio and video calls on Android devices. Use Android TelecomManager to provide a consistent user experience across different Android apps and devices, such as showing incoming and outgoing calls in the system UI, routing audio to the device, and handling call interruptions. Now you can integrate your app with the Android TelecomManager to take advantage of its features for your custom calling scenarios.
+For more information, see [Integrate with TelecomManager on Android](./how-tos/calling-sdk/telecommanager-integration.md).
-<br>
-<br>
+#### Unidirectional Data Channel
+The Data Channel API is generally available. Data Channel includes unidirectional communication, which enables real-time messaging during audio and video calls. Using this API, you can integrate data exchange functions into the applications, providing a seamless communication experience for users. The Data Channel API enables users to instantly send and receive messages during an ongoing audio or video call, promoting smooth and efficient communication. In group call scenarios, a participant can send messages to a single participant, a specific set of participants, or all participants within the call. This flexibility enhances communication and collaboration among users during group interactions.
-### Email Simple Mail Transfer Protocol (SMTP) as Service
+For more information, see [Data Channel](./concepts/voice-video-calling/data-channel.md).
-Azure Communication Services Email Simple Mail Transfer Protocol (SMTP) as a Service is now in public preview. This service allows you to send emails from your line of business applications using a cloud-based SMTP relay that is secure, reliable, and compliant. You can use Microsoft Entra Application ID to authenticate your SMTP requests and apply the power of Exchange as a transport. Whether you need to send high-volume B2C communications or occasional notifications, this service can meet your needs and expectations.
+#### Time To Live lifespan for push notifications
+The Time To Live (TTL) for push notifications is now generally available. TTL is the duration for which a push notification token is valid. Using a longer duration TTL can help your app reduce the number of new token requests from your users and improve the experience.
+For example, suppose you created an app that enables patients to book virtual medical appointments. The app uses push notifications to display incoming call UI when the app isn't in the foreground. Previously, the app had to request a new push notification token from the user every 24 hours, which could be annoying and disruptive. With the extended TTL feature, you can now configure the push notification token to last for up to six months, depending on your business needs. This way, the app can avoid frequent token requests and provide a smoother calling experience for your customers.
-[Read the documentation.](./concepts/email/email-smtp-overview.md)
+For more information, see [TTL token in Enable push notifications for calls](./how-tos/calling-sdk/push-notifications.md).
+### Calling SDK native UI Library updates
-<br>
-<br>
+This update includes Troubleshooting on the native UI Library for Android and iOS, and Audio only mode in the UI Library.
-### Azure AI-powered Azure Communication Services Call Automation API Actions
+Using the Azure Communication Services Calling SDK native UI Library, you can now generate encrypted logs for troubleshooting and provide your customers with an optional Audio only mode for joining calls.
-Azure AI-powered Call Automation API actions are now generally available for developers who want to create enhanced calling workflows using Azure AI Speech-to-Text, Text-to-Speech and other language understanding engines. These actions allow developers to play dynamic audio prompts and recognize voice input from callers, enabling natural conversational experiences and more efficient task handling. Developers can use these actions with any of the four major SDKs - .NET, Java, JavaScript and Python - and integrate them with their Azure OpenAI solutions to create virtual assistants that go beyond simple IVRs. You can learn more about this release and its capabilities from the Microsoft Ignite 2023 announcements blog and on-demand session.
+#### Troubleshooting on the native UI Library for Android and iOS
-[Read more in the Ignite Blog post.](https://techcommunity.microsoft.com/t5/azure-communication-services/ignite-2023-creating-value-with-intelligent-application/ba-p/3907629)
+Now in general availability, you can encrypt logs when troubleshooting on the Calling SDK native UI Library for Android and iOS. You can easily generate encrypted logs to share with Azure support. While ideally calls just work, or developers self-remediate issues, customers always have Azure support as a last line of defense, and we strive to make those engagements as easy and fast as possible.
-[View the on-demand session from Ignite.](https://ignite.microsoft.com/en-US/sessions/18ac73bd-2d06-4b72-81d4-67c01ecb9735?source=sessions)
+For more information, see [Troubleshoot the UI Library](./how-tos/ui-library-sdk/troubleshooting.md).
-[Read the documentation.](./concepts/call-automation/call-automation.md)
+#### Audio only mode in the UI Library
-[Try the quickstart.](./quickstarts/call-automation/quickstart-make-an-outbound-call.md)
+The Audio only mode in the Calling SDK UI Library is now generally available. It enables participants to join calls using only their audio, without sharing or receiving video. Participants can use this feature to conserve bandwidth and maximize privacy. When activated, the Audio only mode automatically disables the video function for both sending and receiving streams and adjusts the UI to reflect this change by removing video-related controls.
-[Try a sample application.](./samples/call-automation-ai.md)
-<br>
-<br>
+For more information, see [Enable audio only mode in the UI Library](./how-tos/ui-library-sdk/audio-only-mode.md).
+## March 2024
-### Job Router
+### Calling to Microsoft Teams Call Queues and Auto Attendants
+Azure Communication Services Calling to Teams call queues and auto attendants and click-to-call for Teams Phone are now generally available. Organizations can enable customers to easily reach their sales and support members on Microsoft Teams with just a single click. When you add a [click-to-call widget](./tutorials/calling-widget/calling-widget-tutorial.md) onto a website, such as a **Sales** button that points to a sales department, or a **Purchase** button that points to procurement, customers are just one click away from a direct connection into a Teams call queue or auto attendant.
+
+Learn more about joining your calling app to a Teams [call queue](./quickstarts/voice-video-calling/get-started-teams-call-queue.md) or [auto attendant](./quickstarts/voice-video-calling/get-started-teams-auto-attendant.md), and about [building contact center applications](./tutorials/contact-center.md).
+
+### Email Updates
+
+Updates to Azure Communication Services Email service:
+
+- SMTP
+- Opt-out management
+- PowerShell cmdlets
+- CLI extension
+
+#### SMTP
+
+SMTP as a Service for Email is now generally available. Developers can use the SMTP support in Azure Communication Services to easily send emails, improve security features, and have more control over outgoing communications.
+
+The SMTP Relay Service acts as a link between email clients and mail servers and helps deliver emails more effectively. It sets up a specialized relay infrastructure that not only handles higher throughput needs and successful email delivery, but also improves authentication to protect communication. This service also offers businesses a centralized platform that lets them manage outgoing emails for all B2C communications and get insights into email traffic.
+
+With this capability, customers can switch from on-premises SMTP solutions or link their line of business applications to a cloud-based solution platform with Azure Communication Services Email. SMTP as a Service enables:
+
+- Secure and reliable SMTP endpoint with TLS 1.2 encryption.
+- Access with Microsoft Entra Application ID to secure authentication for sending emails using SMTP.
+- High volume sending support for B2C communications using SMTP and REST APIs.
+- The security and compliance to honor and respect data handling and privacy requirements that Azure promises to our customers.
++
+Learn more about [SMTP as a Service](./concepts/email/email-smtp-overview.md).
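To make the relay concrete, here's a hedged sketch of sending a message through the SMTP endpoint with curl; the host name, port, and credential format are assumptions to verify against the linked overview:

```bash
# Sketch: send mail through the ACS SMTP relay with curl.
# Host, port, and username format are assumptions; the credentials come from
# the Entra application configured for your Communication resource.
curl --ssl-reqd \
  --url "smtp://smtp.azurecomm.net:587" \
  --user "<smtp-username>:<smtp-password>" \
  --mail-from "donotreply@<your-verified-domain>" \
  --mail-rcpt "customer@example.com" \
  --upload-file message.txt
```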
+
+#### Opt-out Management
+
+Email opt-out management, now in public preview, offers a powerful platform with a centralized managed unsubscribe list and opt-out preferences saved to our data store. This feature helps developers meet guidelines of email providers who often require one-click list-unsubscribe implementation in the emails sent from their platforms. Opt-out Management helps you identify and avoid significant delivery problems. You can maintain compliance by adding suppression list features to help improve reputation and enable customers to easily manage opt-outs.
++
+Get started with [Manage email opt-out capabilities](./concepts/email/email-optout-management.md).
+
+#### PowerShell Cmdlets & CLI extension
+
+##### PowerShell Cmdlets
+
+To enhance the developer experience, Azure Communication Services is introducing more PowerShell cmdlets and Azure CLI extensions for working with Azure Communication Service Email.
+
+With the addition of these new cmdlets, developers can now use Azure PowerShell cmdlets for all CRUD operations for Email Service, including:
+
+- Create Communication Service Resource (existing)
+- Create Email Service Resource (new)
+- Create Domain (Azure Managed or Custom Domain) Resource (new)
+- Initiate/Cancel Custom Domain verification (new)
+- Add a sender username to a domain (new)
+- Link a Domain Resource to a Communication Service Resource (existing)
+
+Learn more at [PowerShell cmdlets](/powershell/module/az.communication/).
+
+##### Azure CLI extension for Email Service Resources management
+
+Developers can use Azure CLI extensions for their end-to-end send email flow including:
+
+- Create Communication Service Resource (existing)
+- Create Email Service Resource (new)
+- Create Domain (Azure Managed or Custom Domain) Resource (new)
+- Add a sender username to a domain (new)
+- Link a Domain Resource to a Communication Service Resource (existing)
+- Send an Email (existing)
+
+Learn more in [Extensions](/cli/azure/communication/email).
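For instance, here's a hedged sketch of the send step with the CLI extension; the parameter names are assumptions to verify against the linked reference:

```azurecli
# Sketch: send a test email with the communication CLI extension.
# Parameter names and the sender address format are assumptions.
az communication email send \
  --sender "DoNotReply@<your-verified-domain>" \
  --to "customer@example.com" \
  --subject "Order confirmation" \
  --text "Thanks for your order." \
  --connection-string "$AZURE_COMMUNICATION_CONNECTION_STRING"
```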
+
+## February 2024
+
+### Limited Access User Tokens
+
+New, limited access user tokens are now in general availability. Limited access user tokens enable customers to exercise finer grain control over user capabilities such as to start a new call/chat or participate in an ongoing call/chat.
+
+When a customer creates an Azure Communication Services user identity, the user is granted the capability to participate in chats or calls, using access tokens. For example, a user must have a chat token to participate in chat threads. Similarly, a VoIP token is required to participate in a VoIP call. A user can have multiple tokens simultaneously.
+
+With the limited access tokens, Azure Communication Services supports controlling full access versus limited access within chat and calling. Customers can now control the user's ability to initiate a new call or chat as opposed to participating in existing calls or chats.
+
+These tokens solve the cold-call or cold-chat issue. For example, without limited access tokens if a user has VoIP token, they can initiate calls and participate in calls. So theoretically, a defendant could call a judge directly or a patient could call a doctor directly. This is undesirable for most businesses. With new limited access tokens, developers are able to give a limited access token to a patient so they can join a call but can't initiate a direct call to anyone.
+
+For more information, see [Identity model](./concepts/identity-model.md).
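As an illustration, here's a hedged sketch of issuing a join-only token with the identity CLI; the limited scope names (`chat.join`, `voip.join`) are assumptions based on the linked identity model:

```azurecli
# Sketch: issue a limited access token that allows joining, but not starting,
# calls and chats. Scope names are assumptions.
az communication identity token issue \
  --scope chat.join voip.join \
  --connection-string "$AZURE_COMMUNICATION_CONNECTION_STRING"
```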
+
+### Try Phone Calling
+
+Try Phone Calling, now in public preview, is a tool in the Azure portal that helps customers confirm the setup of a telephony connection by making a phone call. It applies to both Voice Calling (PSTN) and direct routing. Try Phone Calling enables developers to quickly test Azure Communication Services calling capabilities, without an existing app or code on their end.
++
+Learn more about [Try Phone Calling](./concepts/telephony/try-phone-calling.md).
-Job Router APIs are now generally available for developers who want to use Azure Communication Services to create personalized customer experiences across multiple communication channels. These APIs allow developers to classify, queue, and distribute jobs to the most suitable workers based on various routing rules, using any of the three major SDKs - .NET, JavaScript, and Python. You can learn more about Job Router and how to use it with Azure AI Services from the Ignite 2023 announcement blog and prerecorded video.
-[Read more in the Ignite Blog post.](https://techcommunity.microsoft.com/t5/azure-communication-services/ignite-2023-creating-value-with-intelligent-application/ba-p/3907629)
+### UI Native Library Updates
+
+Updates to the UI Native Library include moving User Facing Diagnostics to general availability and releasing 1:1 Calling and an iOS CallKit integration.
+
+### User Facing Diagnostics
+
+User Facing Diagnostics (UFD) is now generally available. User Facing Diagnostics enhances the user experience by providing a set of events that fire when a call signal changes, for example, when a participant is talking while the microphone is muted, or when the device isn't connected to a network. Developers can subscribe to triggers such as weak network signals or muted microphones, ensuring that they're always aware of any factors impacting calls.
+
+By bringing UFD into the UI Library, we help customers implement events. This provides a more fluid experience. Customers can use UFDs to notify end-users in real time if they face connectivity and quality issues during the call. Issues can include muted microphones, network issues, or other problems. Customers receive a toast notification during the call to indicate quality issues. This also helps by sending telemetry to help you track any event and review the call status.
+
+For more information, see [User Facing Diagnostics](./concepts/voice-video-calling/user-facing-diagnostics.md).
+
+### 1:1 Calling
+
+One-on-one calling for Android and iOS is now available. With this latest public preview release, starting a call is as simple as a tap. Recipients are promptly alerted with a push notification to answer or decline the call. If the iOS native application requires direct calling between two entities, developers can use the 1:1 calling function to make it happen. For example, a client might call their financial advisor to make account changes. This feature is currently in public preview version 1.6.0.
-[View the on-demand session from Ignite.](https://ignite.microsoft.com/en-US/sessions/18ac73bd-2d06-4b72-81d4-67c01ecb9735?source=sessions)
+For more information, see [Set up one-to-one calling and push notifications in the UI Library](./how-tos/ui-library-sdk/one-to-one-calling.md).
-[Read the documentation.](./concepts/router/concepts.md)
+### iOS CallKit Integrations
-[Try the quickstart.](./quickstarts/router/get-started-router.md)
-<br>
-<br>
+Azure Communication Services seamlessly integrates CallKit, in public preview, for a native iOS call experience. Now, calls made through the Native UI SDK have the same iOS calling features such as notification, call history, and call on hold. These iOS features blend perfectly with the existing native experience.
-### Azure Bot Support
+UI Library developers can use this integration instead of spending time building their own. The iOS CallKit provides an out-of-the-box experience, meaning that integrated apps use the same interfaces as regular cellular calls. For end-users, incoming VoIP calls display the familiar iOS call screen, providing a consistent and intuitive experience.
+For more information, see [Integrate CallKit into the UI Library](./how-tos/ui-library-sdk/callkit.md).
-With this release, you can use Azure bots to enhance your Chat service and integrate with Azure AI services. This helps you automate routine tasks for your agents, such as getting customer information and answering frequently asked questions. This way, your agents can focus on complex queries and assist more customers.
+### PSTN Direct Offers
-[Try the quickstart.](./quickstarts/chat/quickstart-botframework-integration.md)
-[Read more about how to use adaptive cards.](https://adaptivecards.io/samples/)
-<br>
-<br>
+Azure Communication Services has continued to expand Direct Offers to new geographies. We just launched PSTN Direct Offers in general availability for 42 countries.
+The full list of countries where we offer PSTN Direct Offers:
-### Managed Identities in Public Preview
+Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Colombia, Denmark, Finland, France, Germany, Hong Kong, Indonesia, Ireland, Israel, Italy, Japan, Luxembourg, Malaysia, Mexico, Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Puerto Rico, Saudi Arabia, Singapore, Slovakia, South Africa, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, UAE (United Arab Emirates), United Kingdom, and United States
-Azure Communication Services now supports Azure Managed Identities, which are a feature of Microsoft Entra ID (formerly Azure Active Directory (Azure AD)) that allow resources to securely authenticate with other Azure services that support Entra authentication. Managed Identities is an [Azure Enterprise Promise](/entra/identity/managed-identities-azure-resources/overview) that improves security and simplifies workflows for customers, as they don't need to embed security credentials into their code. Managed Identities can be used in Azure Communication Services for various scenarios, such as connecting Cognitive Services, Azure Storage, and Key-Vault. You can learn more about this feature and how to use it from the Ignite 2023 announcement blog and prerecorded video.
+In addition to getting all current offers into general availability, we have introduced over 400 new cross-country offers.
-[Try the quickstart.](./how-tos/managed-identity.md)
+Check all the new countries, phone number types, and capabilities at [Country/regional availability of telephone numbers and subscription eligibility](./concepts/numbers/sub-eligibility-number-capability.md).
+## January 2024
-<br>
+### Dial out to a PSTN number
-## Blog posts and case studies
-Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication
-Services features.
+Virtual Rooms support VoIP audio and video calling. Now you can also dial out to PSTN numbers and include the PSTN participants in an ongoing call. Virtual Rooms empowers developers to exercise control over PSTN dial-out capability in two ways. Developers can enable or disable PSTN dial-out capability for specific Virtual Rooms, and they can also control which users in Virtual Rooms can initiate PSTN dial-out. Only users with the Presenter role can initiate a PSTN dial-out, ensuring secure and structured communication.
-### Azure Communication Services at DEVintersection & Microsoft Azure + AI Conference
+For more information, see [Quickstart: Create and manage a room resource](./quickstarts/rooms/get-started-rooms.md).
-Microsoft employees Shawn Henry and Dan Wahlin presented at the DEVintersection & Microsoft Azure + AI conference in Orlando, FL. Shawn and Dan presented four separate sessions plus a workshop. The sessions were:
+### Remote mute call participants
-- Transform Customer Experiences with AI-assisted Voice, Video and Chat-- Azure for WhatsApp, SMS, and Email Integration: A Developer's Guide-- Take Your Apps to the Next Level with AI, Communication, and Organizational Data-- Integrate Services Across the Microsoft Cloud to Enhance User Collaboration
+Participants can now mute other participants in Virtual Rooms calls. Previously, participants in Virtual Rooms calls could only mute/unmute themselves. There are times when you want to mute other participants due to background noise or if someone's microphone is left unmuted.
-and the workshop was entitled "Integrate OpenAI, Communication, and Organizational Data Features into Line of Business Apps"
+Participants in the Presenter role can mute a participant, multiple participants, or all other participants. Users retain the ability to unmute themselves as needed. For privacy reasons, no one can unmute other participants.
+For more information, see [Mute other participants](./how-tos/calling-sdk/manage-calls.md#mute-other-participants).
+
+### Call Recording in Virtual Rooms
+
+Developers can now start, pause, and stop call recording in calls conducted in Virtual Rooms. Call Recording is a service-side capability; developers start, pause, and stop recording using server-side API calls. This feature enables invited participants who couldn't make the original session to view the recording and stay up to date asynchronously.
+
+For more information, see [Manage call recording on the client](./how-tos/calling-sdk/record-calls.md).
+
+### Closed Captions in Virtual Rooms
+
+Closed captioning converts a voice or video call's audio track into written words that appear in real time. Closed captions are a useful tool for participants who prefer to read the audio text in order to engage more actively in conversations and meetings. They also help in scenarios where participants might be in noisy environments or have audio equipment problems.
+
+Closed captions are never saved and are visible only to the user who enabled them.
++
+For more information, see [Closed Captions overview](./concepts/voice-video-calling/closed-captions.md).
++
+## December 2023
+
+### Call Diagnostics
+
+Azure Communication Services Call Diagnostics (CD) is available in Public Preview. Call Diagnostics help developers troubleshoot and improve their voice and video calling applications.
+
-[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-at-devintersection-amp-amp/ba-p/3999834)
+Call Diagnostics is an Azure Monitor experience that offers specialized telemetry and diagnostic pages in the Azure portal. With Call Diagnostics, you can access and analyze data, visualizations, and insights for each call. Then you can identify and resolve issues that affect the end-user experience.
-[Read more about the conference](https://devintersection.com/#!/)
+Call Diagnostics works with other ACS features, such as noise suppression and pre-call troubleshooting, to deliver beautiful, reliable video calling experiences that are easy to develop and operate. Call Diagnostics is now available in Public Preview. Try it today and see how Azure can help you make every call a success.
-### Ignite 2023: Creating value with intelligent application solutions for B2C communications
+For more information, see [Call Diagnostics](./concepts/voice-video-calling/call-diagnostics.md).
-Read a summary of all of the new features we announced at Ignite, including Azure AI Speech, Job Router and Azure AI Services!
+### WebJS Calling Updates
+Several WebJS Calling features moved to general availability: Media Quality Statistics, Video Constraints, and Data Channel.
-[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/ignite-2023-creating-value-with-intelligent-application/ba-p/3907629)
+#### Media Quality Statistics
+Developers can use the Media Quality Statistics API to better understand video calling quality and reliability in real time from within the calling SDK. By understanding from the client side what end customers are experiencing, developers can delve deeper into diagnosing and mitigating any issues that arise for their users.
-<br>
-<br>
+ For more information, see [Media Quality Statistics](./concepts/voice-video-calling/media-quality-sdk.md).
+#### Video Constraints
-### View of new features from November and December 2023
+Developers can use Video Constraints to better manage the overall quality of calls. For example, if a developer knows that a participant has a poor internet connection, the developer can limit video resolution size on the sender side to use less bandwidth. The result is an improved calling experience for the participant.
-[View the complete list of all features launched in November and December.](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-december-2023-feature-updates/ba-p/4003567) of all new features added to Azure Communication Services in December.
+Improve your calling experience as described in [Quickstart: Set video constraints in your calling app](./quickstarts/voice-video-calling/get-started-video-constraints.md).
+#### Data Channel
+The Data Channel API enables real-time messaging during audio and video calls. This function enables developers to manage their own data pipeline and send their own unique messages to remote participants on a call. The data channel enhances communication capabilities by enabling local participants to connect directly to remote participants when the scenario requires.
-<br>
-<br>
+Get started with [Quickstart: Add Data Channel messaging to your calling app](./quickstarts/voice-video-calling/get-started-data-channel.md).
-<br>
+## Related articles
-Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, as they're released, visit the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog)
+For a complete list of new features and bug fixes, see the [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, see the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog).
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
Based on whether your workflow is [Consumption or Standard](../logic-apps/logic-
| Property | JSON name | Required | Type | Description | |-|--|-||-|
- | **Time zone** | `timeZone` | No | String | Applies only when you specify a start time because this trigger doesn't accept [UTC offset](https://en.wikipedia.org/wiki/UTC_offset). Select the time zone that you want to apply. |
+ | **Time zone** | `timeZone` | No | String | Applies only when you specify a start time because this trigger doesn't accept [UTC offset](https://en.wikipedia.org/wiki/UTC_offset). Select the time zone that you want to apply. For more information, see [Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones#time-zones). |
| **Start time** | `startTime` | No | String | Provide a start date and time, which has a maximum of 49 years in the future and must follow the [ISO 8601 date time specification](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) in [UTC date time format](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), but without a [UTC offset](https://en.wikipedia.org/wiki/UTC_offset): <br><br>YYYY-MM-DDThh:mm:ss if you select a time zone <br><br>-or- <br><br>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone <br><br>So for example, if you want September 18, 2020 at 2:00 PM, then specify "2020-09-18T14:00:00" and select a time zone such as Pacific Standard Time. Or, specify "2020-09-18T14:00:00Z" without a time zone. <br><br>**Important:** If you don't select a time zone, you must add the letter "Z" at the end without any spaces. This "Z" refers to the equivalent [nautical time](https://en.wikipedia.org/wiki/Nautical_time). If you select a time zone value, you don't need to add a "Z" to the end of your **Start time** value. If you do, Logic Apps ignores the time zone value because the "Z" signifies a UTC time format. <br><br>For simple schedules, the start time is the first occurrence, while for complex schedules, the trigger doesn't fire any sooner than the start time. [*What are the ways that I can use the start date and time?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time) |
| **On these days** | `weekDays` | No | String or string array | If you select "Week", you can select one or more days when you want to run the workflow: **Monday**, **Tuesday**, **Wednesday**, **Thursday**, **Friday**, **Saturday**, and **Sunday** |
| **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day. <br><br>**Note**: By default, the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
You learn how to:
> - Create an Azure Blob Storage for use as a Dapr state store
> - Deploy a Container Apps environment to host container apps
> - Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them
-> - Assign a user-assigned identity to a container app and supply it with the appropiate role assignment to authenticate to the Dapr state store
+> - Assign a user-assigned identity to a container app and supply it with the appropriate role assignment to authenticate to the Dapr state store
> - Verify the interaction between the two microservices. With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities.
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Previously updated : 09/29/2022 Last updated : 08/21/2024 ms.devlang: azurecli
az containerapp create \
--env-vars 'APP_PORT=3000' ```
+If you're using an Azure Container Registry, include the `--registry-server <REGISTRY_NAME>.azurecr.io` flag in the command.
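For example, a minimal sketch of such a command with a private registry (the registry, app, and environment names here are hypothetical; depending on your setup, you may also need registry credentials or a managed identity):

```azurecli-interactive
# Sketch: create the container app, pulling its image from a private
# Azure Container Registry (hypothetical names).
az containerapp create \
  --name nodeapp \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/hello-k8s-node:latest \
  --registry-server myregistry.azurecr.io
```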
+ # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
$ServiceArgs = @{
New-AzContainerApp @ServiceArgs ```
+If you're using an Azure Container Registry, include the `RegistryServer = '<REGISTRY_NAME>.azurecr.io'` flag in the command.
+ By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-node).
az containerapp create \
--dapr-app-id pythonapp ```
+If you're using an Azure Container Registry, include the `--registry-server <REGISTRY_NAME>.azurecr.io` flag in the command.
+ # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
$ClientArgs = @{
New-AzContainerApp @ClientArgs ```
+If you're using an Azure Container Registry, include the `RegistryServer = '<REGISTRY_NAME>.azurecr.io'` flag in the command.
+ ## Verify the results
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
az containerapp sessionpool create \
--network-status EgressDisabled \ --max-sessions 10 \ --ready-sessions 5 \
- --env-vars "key1=value1" "key2=value2"
+ --env-vars "key1=value1" "key2=value2" \
+ --location <LOCATION>
``` This command creates a session pool with the following settings:
This command creates a session pool with the following settings:
| `--max-sessions` | `10` | The maximum number of sessions that can be allocated at the same time. |
| `--ready-sessions` | `5` | The target number of sessions that are ready in the session pool all the time. Increase this number if sessions are allocated faster than the pool is being replenished. |
| `--env-vars` | `"key1=value1" "key2=value2"` | The environment variables to set in the container. |
+| `--location` | `<LOCATION>` | The location of the session pool. Specify a region where session pools are supported. |
To update the session pool, use the `az containerapp sessionpool update` command.
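For instance, a hedged sketch that scales an existing pool (the pool and resource group names are placeholders; the same `--max-sessions` and `--ready-sessions` settings described above apply):

```azurecli-interactive
# Sketch: raise the session limits on an existing pool (hypothetical names).
az containerapp sessionpool update \
  --name my-session-pool \
  --resource-group my-resource-group \
  --max-sessions 20 \
  --ready-sessions 10
```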
copilot Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/manage-access.md
Title: Manage access to Microsoft Copilot in Azure description: Learn how administrators can manage user access to Microsoft Copilot in Azure. Previously updated : 08/13/2024 Last updated : 08/21/2024
# Manage access to Microsoft Copilot in Azure
-> [!NOTE]
-> We're currently in the process of rolling out Microsoft Copilot in Azure (preview) to all Azure tenants. We'll remove this note once the functionality is available to all users.
- By default, Copilot in Azure is available to all users in a tenant. However, [Global Administrators](/entra/identity/role-based-access-control/permissions-reference#global-administrator) can manage access to Copilot in Azure for their organization. Access can also be optionally granted to specific Microsoft Entra users or groups. If Copilot in Azure is not available for a user, they'll see an unauthorized message when they select the **Copilot** button in the Azure portal.
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Previously updated : 02/26/2024 Last updated : 08/21/2024
If your enrollment still has credits, you see the following error that prevents
`Select another enrollment. This enrollment still has credits and can't be transitioned to a billing account.`
-If your new billing profile doesn't have the new plan enabled, you see the following error. You must enable the plan before you can transition your enrollment.
+If your new billing profile doesn't have the new plan enabled, you see the following error. You must enable the plan before you can transition your enrollment. Contact your Microsoft representative for assistance.
`Select another Billing Profile. The current selection does not have Azure Plan and Azure dev test plan enabled on it.`
Support benefits don't transfer as part of the transition. Purchase a new suppor
Charges and credits balance before the transition can be viewed in your Enterprise Agreement enrollment through the Azure portal.
-### When should the setup be completed?
+### When should you complete the setup?
Complete the setup of your billing account before your Enterprise Agreement enrollment expires. If your enrollment expires, services in your Azure subscriptions continue to run without disruption. However, you're charged pay-as-you-go rates for the services.
The transition can't be reverted. Once the billing of your Azure subscriptions i
### Closing your browser during setup
-Before you select **Start transition**, you can close the browser. You can come back to the setup using the link you got in the email and start the transition. If you close the browser, after the transition is started, your transition will keep on running. Come back to the transition status page to monitor the latest status of your transition. You get an email when the transition is completed.
+Before you select **Start transition**, you can close the browser. You can come back to the setup using the link you got in the email and start the transition. If you close the browser, after the transition is started, your transition will keep on running. To monitor the latest status of your transition, return to the transition status page. You get an email when the transition is completed.
## Complete the setup in the Azure portal
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Azure has the following policies for cancellations, exchanges, and refunds.
**Refund policies** - We're currently not charging an early termination fee, but in the future there might be a 12% early termination fee for cancellations.-- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It gets refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund is 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit increases by 2,400 USD. Your new pool is 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment depletes the same pool, and the same replenishment logic applies.
+- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It gets refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund is 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit increases by 2,400 USD. Your new pool is 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment depletes the same pool, and the same replenishment logic applies. This example also applies to the monthly payment method.
- Azure doesn't process any refund that exceeds the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment.
- Refunds that result from an exchange don't count against the refund limit.
- Refunds are calculated based on the lowest price of either your purchase price or the current price of the reservation.
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-portfolio-overview-os-support.md
Most of the Linux Operating Systems (OS) are covered by both agents. The agents
| Ubuntu 20.04 | ✓ | ✓ | ✓ |
| Ubuntu 22.04 | ✓ | | |
-The Defender for IoT micro agent also supports Yocto as an open source.
- For a more granular view of the micro agent-operating system dependencies, see [Linux dependencies](concept-micro-agent-linux-dependencies.md#linux-dependencies). ## Eclipse ThreadX micro agent
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-azure-signalr.md
description: Learn how to stream Azure Digital Twins telemetry to clients using Azure SignalR Previously updated : 06/21/2022 Last updated : 08/21/2024
Here are the prerequisites you should complete before proceeding:
Be sure to sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, as you'll need to use it in this guide.
-## Solution architecture
-
-You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you'll build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps.
--
-## Download the sample applications
+### Download the sample applications
First, download the required sample apps. You'll need both of the following samples: * [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
First, download the required sample apps. You'll need both of the following samp
* [SignalR integration web app sample](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This sample React web app will consume Azure Digital Twins telemetry data from an Azure SignalR Service. - Navigate to the sample link and use the same download process to download a copy of the sample to your machine, as *digitaltwins-signalr-webapp-sample-main.zip*. Unzip the folder.
+## Solution architecture
+
+You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you'll build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps.
++
+## Create Azure SignalR instance
+
+Next, create an Azure SignalR instance to use in this article by following the instructions in [Create an Azure SignalR Service instance](../azure-signalr/signalr-quickstart-azure-functions-csharp.md#create-an-azure-signalr-service-instance) (for now, only complete the steps in this section).
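If you prefer the CLI over the portal, a minimal sketch (the instance and resource group names are hypothetical; Serverless is the service mode that the Azure Functions SignalR bindings expect):

```azurecli-interactive
# Sketch: create the SignalR instance in Serverless mode (hypothetical names).
az signalr create \
  --name my-signalr-instance \
  --resource-group my-resource-group \
  --sku Free_F1 \
  --service-mode Serverless
```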
Leave the browser window open to the Azure portal, as you'll use it again in the next section.
Start Visual Studio or another code editor of your choice, and open the code sol
1. In Visual Studio's **Package Manager Console** window, or any command window on your machine, navigate to the folder *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp*, and run the following command to install the `SignalRService` NuGet package to the project: ```cmd
- dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.2.0
+ dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.14.0
``` Running this command should resolve any dependency issues in the class.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
To publish the function app to Azure, you'll need to create a storage account, t
1. Next, you'll zip up the functions and publish them to your new Azure function app.
- 1. Open a console window on your machine, and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder inside your downloaded sample project.
+ 1. Open a console window on your machine (if you're using the local Azure CLI, it can be the same window), and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder inside your downloaded sample project.
1. In the console, run the following command to publish the project locally:
To publish the function app to Azure, you'll need to create a storage account, t
:::image type="content" source="media/tutorial-end-to-end/publish-zip.png" alt-text="Screenshot of File Explorer in Windows showing the contents of the publish zip folder.":::
- Now you can close the local console window that you used to prepare the project. The last step will be done in the Azure CLI.
+ The last step will be done in the Azure CLI.
1. In the Azure CLI, run the following command to deploy the published and zipped functions to your Azure function app:
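   A zip deployment of this kind typically uses `az functionapp deployment source config-zip`; a minimal sketch, with placeholder names and path:

```azurecli-interactive
# Sketch: zip-deploy the published functions (placeholder names and path).
az functionapp deployment source config-zip \
  --resource-group <your-resource-group> \
  --name <your-function-app-name> \
  --src publish.zip
```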
education-hub About Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/about-education-hub.md
You can easily adjust the amount of allocated credit within each subscription or
## Prerequisites
-To access the Education Hub, you must first receive an email notification that confirms your approval for an academic grant and contains your approved credit amount.
+To access the Education Hub, you must first receive an email notification that confirms your approval for an academic sponsorship and contains your approved credit amount.
## Related content
education-hub Access Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/access-education-hub.md
# Access the Azure Education Hub
-The email that you received to confirm your approval for an academic grant includes a [link to the Azure Education Hub](https://aka.ms/startedu) and an approved credit amount. Most sponsored academic offers include a Developer tier of Azure support for free.
+The email that you received to confirm your approval for an academic sponsorship includes a [link to the Azure Education Hub](https://aka.ms/startedu) and an approved credit amount. Most sponsored academic offers include a Developer tier of Azure support for free.
-To use your academic grant, you must create a lab in the Education Hub and use subscriptions within the lab that will access your Azure credit. Start by signing in to the [Azure portal](https://portal.azure.com) so you can access the Education Hub.
+To use your academic sponsorship, you must create a lab in the Education Hub and use subscriptions within the lab that will access your Azure credit. Start by signing in to the [Azure portal](https://portal.azure.com) so you can access the Education Hub.
## Authenticate through the Azure portal
education-hub Add Student Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/add-student-api.md
The API response includes information about the students in the lab:
## Related content -- [Manage your academic grant by using the Overview page](hub-overview-page.md)
+- [Manage your academic sponsorship by using the Overview page](hub-overview-page.md)
- [Learn about support options](educator-service-desk.md)
education-hub Create Assignment Allocate Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-assignment-allocate-credit.md
After you set up a lab in the Azure Education Hub, you can add students and allo
## Prerequisites -- An academic grant with an approved credit amount
+- An academic sponsorship with an approved credit amount
- A work or school account and a subscription within the course that will access your Azure credit

### Accounts
education-hub Create Lab Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-lab-education-hub.md
The API response shows the details:
## Related content - [Add students to a lab](add-student-api.md)-- [Manage your academic grant by using the Overview page](hub-overview-page.md)
+- [Manage your academic sponsorship by using the Overview page](hub-overview-page.md)
- [Learn about support options](educator-service-desk.md)
education-hub Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/faq.md
Your Azure for Students Starter subscription gives you access to certain softwar
[Microsoft Learn training](/training/) is a free online learning platform that helps you learn Azure technologies at your own pace. Learning paths contain modules that start with the basics and then move to advanced methods that address real-world challenges.
-## Azure Academic Grant
+## Azure Classroom
### How do I start using my Azure course credits?
-You can access your Azure course credits by creating a new Azure Academic Grant subscription. Select the **Activate** button in the sponsorship approval email.
+You can access your Azure course credits by creating a new Azure Academic Sponsorship subscription. Select the **Activate** button in the sponsorship approval email.
You can also convert an existing subscription to the Azure Sponsorship offer to access your credits. To convert your subscription, contact [Azure support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
education-hub Find Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/find-ids.md
You must have an Azure account linked with the Education Hub.
## Related content -- [Manage your academic grant by using the Overview page](hub-overview-page.md)
+- [Manage your academic sponsorship by using the Overview page](hub-overview-page.md)
- [Learn about support options](educator-service-desk.md)
education-hub Full Api Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/full-api-script.md
ConvertTo-Json $response
## Related content -- [Manage your academic grant by using the Overview page](hub-overview-page.md)
+- [Manage your academic sponsorship by using the Overview page](hub-overview-page.md)
- [Learn about support options](educator-service-desk.md)
education-hub Get Started Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/get-started-education-hub.md
Before you access the Azure Education Hub you must complete signup by clicking t
## Related content -- [Manage your academic grant by using the Overview page](hub-overview-page.md)
+- [Manage your academic sponsorship by using the Overview page](hub-overview-page.md)
- [Support options](educator-service-desk.md)
education-hub Hub Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/hub-overview-page.md
Title: Manage your academic grant in the Azure Education Hub
-description: Learn how to view and update the details of your academic grant on the Overview page of the Azure Education Hub.
+ Title: Manage your academic sponsorship in the Azure Education Hub
+description: Learn how to view and update the details of your academic sponsorship on the Overview page of the Azure Education Hub.
Last updated 08/07/2024
-# Manage your academic grant in the Azure Education Hub
+# Manage your academic sponsorship in the Azure Education Hub
-Your main landing page in the Azure Education Hub is the **Overview** page. This page contains all the relevant information about your academic grant, such as the number of labs that you established and your total running credit allocated and used from those labs. It also displays other resources available to help you get started with allocating credits and tracking your spend.
+Your main landing page in the Azure Education Hub is the **Overview** page. This page contains all the relevant information about your academic sponsorship, such as the number of labs that you established and your total running credit allocated and used from those labs. It also displays other resources available to help you get started with allocating credits and tracking your spend.
## Overview page
education-hub Set Up Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/set-up-lab.md
In this quickstart, you create a lab in the Azure Education Hub and choose the m
## Prerequisites -- An academic grant with an approved credit amount
+- An academic sponsorship with an approved credit amount
## Create a lab
event-grid Blob Event Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-bicep.md
# Quickstart: Route Blob storage events to web endpoint by using Bicep
-Azure Event Grid is an eventing service for the cloud. In this article, you use a Bicep file to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+In this article, you use a Bicep file to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+
+> [!NOTE]
+> If you are new to Azure Event Grid, see [What's Azure Event Grid](overview.md) to get an overview of the service before going through this tutorial.
[!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
If you don't have an Azure subscription, create a [free account](https://azure.m
### Create a message endpoint
-Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [prebuilt web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
1. Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters. [Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json)
-1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+1. The deployment can take a few minutes to complete. After the deployment succeeds, view your web app to make sure it's running. In a web browser, navigate to:
`https://<your-site-name>.azurewebsites.net`
-1. You see the site but no events have been posted to it yet.
+1. You see the site but no events are posted to it yet.
![Screenshot that shows how to view the new site.](./media/blob-event-quickstart-portal/view-site.png)
Two Azure resources are defined in the Bicep file:
## Validate the deployment
-View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+View your web app again, and notice that a subscription validation event was sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
![Screenshot that shows how to view a subscription event.](./media/blob-event-quickstart-portal/view-subscription-event.png) Now, let's trigger an event to see how Event Grid distributes the message to your endpoint.
-You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. The articles assumes you have a file named testfile.txt, but you can use any file.
+You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. The article assumes you have a file named testfile.txt, but you can use any file.
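If you prefer the CLI to the portal for the upload, a minimal sketch (the storage account and container names are placeholders):

```azurecli-interactive
# Sketch: upload a file to trigger a blob-created event (placeholder names).
az storage blob upload \
  --account-name <storage-account-name> \
  --container-name <container-name> \
  --name testfile.txt \
  --file testfile.txt \
  --auth-mode login
```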
When you upload the file to the Azure Blob storage, Event Grid sends a message to the endpoint you configured when subscribing. The message is in the JSON format and it contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a blob created event was received.
When you upload the file to the Azure Blob storage, Event Grid sends a message t
When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group).
-## Next steps
+## Related content
For more information about Azure Resource Manager templates and Bicep, see the following articles: * [Azure Resource Manager documentation](../azure-resource-manager/index.yml) * [Define resources in Azure Resource Manager templates](/azure/templates/)
-* [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/)
* [Azure Event Grid templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Eventgrid).
event-grid Blob Event Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-template.md
# Quickstart: Route Blob storage events to web endpoint by using an ARM template
-Azure Event Grid is an eventing service for the cloud. In this article, you use an Azure Resource Manager template (ARM template) to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+In this article, you use an Azure Resource Manager template (ARM template) to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+
+> [!NOTE]
+> If you are new to Azure Event Grid, see [What's Azure Event Grid](overview.md) to get an overview of the service before going through this tutorial.
+ [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
If you don't have an Azure subscription, create a [free account](https://azure.m
### Create a message endpoint
-Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [prebuilt web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
1. Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters. [Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json)
-1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+1. The deployment can take a few minutes to complete. After the deployment succeeds, view your web app to make sure it's running. In a web browser, navigate to:
`https://<your-site-name>.azurewebsites.net`
-1. You see the site but no events have been posted to it yet.
+1. You see the site but no events are posted to it yet.
![View new site](./media/blob-event-quickstart-portal/view-site.png)
View your web app again, and notice that a subscription validation event has bee
Now, let's trigger an event to see how Event Grid distributes the message to your endpoint.
-You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. The articles assumes you have a file named testfile.txt, but you can use any file.
+You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. The article assumes you have a file named testfile.txt, but you can use any file.
When you upload the file to the Azure Blob storage, Event Grid sends a message to the endpoint you configured when subscribing. The message is in the JSON format and it contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a blob created event was received.
For more information about Azure Resource Manager templates, see the following a
* [Azure Resource Manager documentation](../azure-resource-manager/index.yml) * [Define resources in Azure Resource Manager templates](/azure/templates/)
-* [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/)
+* [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/)
* [Azure Event Grid templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Eventgrid).
event-grid Create View Manage Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespace-topics.md
This article shows you how to create, view, and manage namespace topics in Azure
1. Follow the [create, view and manage namespaces](create-view-manage-namespaces.md) steps to identify the namespace you want to use to create the topic.
-2. Once you are in the resource, click on the **Topics** option in the **Eventing** section.
+2. Once you're in the resource, select **Topics** in the **Event broker** section.
:::image type="content" source="media/create-view-manage-namespace-topics/namespace-topics.png" alt-text="Screenshot showing Event Grid namespace topic section.":::
This article shows you how to create, view, and manage namespace topics in Azure
1. Follow the [create, view, and manage namespaces](create-view-manage-namespaces.md) steps to identify the namespace you want to use to view the topic.
-2. Click on the **Topics** option in the **Eventing** section.
+2. Select **Topics** in the **Event broker** section.
:::image type="content" source="media/create-view-manage-namespace-topics/namespace-topics.png" alt-text="Screenshot showing Event Grid namespace topic section.":::
This article shows you how to create, view, and manage namespace topics in Azure
1. Follow the [create, view, and manage namespaces](create-view-manage-namespaces.md) steps to identify the namespace you want to use to delete the topic.
-2. Click on the **Topics** option in the **Eventing** section.
+2. Select **Topics** in the **Event broker** section.
:::image type="content" source="media/create-view-manage-namespace-topics/namespace-topics.png" alt-text="Screenshot showing Event Grid namespace topic section.":::
event-grid Custom Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-powershell.md
# Quickstart: Route custom events to web endpoint with PowerShell and Event Grid
-Azure Event Grid is an eventing service for the cloud. In this article, you use the Azure PowerShell to create a custom topic, subscribe to the topic, and trigger the event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+In this article, you use Azure PowerShell to create a custom topic, subscribe to the topic, and trigger the event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
+
+> [!NOTE]
+> If you are new to Azure Event Grid, see [What's Azure Event Grid](overview.md) to get an overview of the service before going through this tutorial.
When you're finished, you see that the event data has been sent to the web app.
New-AzResourceGroup -Name gridResourceGroup -Location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<your-topic-name>` with a unique name for your topic. The topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-"
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<your-topic-name>` with a unique name for your topic. The topic name must be unique because it's part of the Domain Name System (DNS) entry. Additionally, it must be 3-50 characters long and contain only the values a-z, A-Z, 0-9, and "-".
```powershell-interactive $topicname="<your-topic-name>"
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart.md
Title: 'Quickstart: Send custom events with Event Grid and Azure CLI'
-description: 'Quickstart Use Azure Event Grid and Azure CLI to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.'
+description: 'In this quickstart, you use Azure Event Grid and Azure CLI to publish a custom topic and subscribe to events for that topic. The events are handled by a web application.'
Last updated 01/05/2024 # Quickstart: Route custom events to web endpoint with Azure CLI and Event Grid
-Azure Event Grid is an eventing service for the cloud. In this article, you use the Azure CLI to create a custom topic, subscribe to the custom topic, and trigger the event to view the result.
+In this article, you use the Azure CLI to create a custom topic in Azure Event Grid, subscribe to the custom topic, and trigger the event to view the result.
++
+> [!NOTE]
+> If you are new to Azure Event Grid, see [What's Azure Event Grid](overview.md) to get an overview of the service before going through this tutorial.
+ Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
event-grid Custom Event To Hybrid Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-hybrid-connection.md
# Tutorial: Route custom events to Azure Relay Hybrid Connections with Azure CLI and Event Grid
-Azure Event Grid is an eventing service for the cloud. Azure Relay Hybrid Connections is one of the supported event handlers. You use hybrid connections as the event handler when you need to process events from applications that don't have a public endpoint. These applications might be within your corporate enterprise network. In this article, you use the Azure CLI to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to the hybrid connection.
+Azure Relay Hybrid Connections is one of the supported event handlers. You use hybrid connections as the event handler when you need to process events from applications that don't have a public endpoint. These applications might be within your corporate enterprise network. In this article, you use the Azure CLI to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to the hybrid connection.
++
+> [!NOTE]
+> If you are new to Azure Event Grid, see [What's Azure Event Grid](overview.md) to get an overview of the service before going through this tutorial.
+ ## Prerequisites
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a DNS entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a Domain Name System (DNS) entry.
```azurecli-interactive az eventgrid topic create --name <topic_name> -l westus2 -g gridResourceGroup
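Once the hybrid connection exists, subscribing the topic to it follows the usual event-subscription pattern; a hedged sketch, with placeholder names and resource IDs:

```azurecli-interactive
# Sketch: subscribe the custom topic to an Azure Relay hybrid connection
# (placeholder names and resource IDs).
topicid=$(az eventgrid topic show --name <topic_name> -g gridResourceGroup --query id --output tsv)
az eventgrid event-subscription create \
  --name hc-subscription \
  --source-resource-id $topicid \
  --endpoint-type hybridconnection \
  --endpoint <hybrid-connection-resource-id>
```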
event-grid Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-domains.md
An event domain provides an endpoint for thousands of individual topics related
Domains also give you authentication and authorization control over each topic so you can partition your tenants. This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Use event domains to:
-* Manage multitenant eventing architectures at scale.
+* Manage multitenant event-driven architectures at scale.
* Manage your authentication and authorization. * Partition your topics without managing each individually. * Avoid individually publishing to each of your topic endpoints.
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
You create a Microsoft Graph API subscription to enable Graph API events to flow
Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/webhooks#receiving-change-notifications) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements: -- You're developing an event-driven solution that requires events from Microsoft Entra ID, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md).
+- You're developing an event-driven solution that requires events from Microsoft Entra ID, Outlook, Teams, etc. to react to resource changes. You require the robust event-driven model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md).
- You want to use Event Grid to route events to multiple destinations using a single Graph API subscription and you want to avoid managing multiple Graph API subscriptions.
- You need to route events to different downstream applications, webhooks, or Azure services depending on some of the properties in the event. For example, you might want to route event types such as `Microsoft.Graph.UserCreated` and `Microsoft.Graph.UserDeleted` to a specialized application that processes users' onboarding and off-boarding. You might also want to send `Microsoft.Graph.UserUpdated` events to another application that syncs contact information. You can achieve that with a single Graph API subscription when using Event Grid as a notification destination (see the sketch after this list). For more information, see [event filtering](event-filtering.md) and [event handlers](event-handlers.md).
- Interoperability is important to you. You want to forward and handle events in a standard way using the Cloud Native Computing Foundation (CNCF) [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specification.
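As a sketch of the filtering scenario above (a hedged example; the partner topic, resource group, and webhook endpoint names are hypothetical):

```azurecli-interactive
# Sketch: route only user create/delete events from a Graph API partner topic
# to a dedicated handler (hypothetical names and endpoint).
az eventgrid partner topic event-subscription create \
  --name user-lifecycle \
  --resource-group my-resource-group \
  --partner-topic-name my-graph-partner-topic \
  --endpoint https://contoso.example.com/api/user-lifecycle \
  --included-event-types Microsoft.Graph.UserCreated Microsoft.Graph.UserDeleted
```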
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
> * [Locations By Provider](expressroute-locations.md) > * [Providers By Location](expressroute-locations-providers.md) - The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers, and ExpressRoute System Integrators (SIs). > [!NOTE]
The tables in this article provide information on ExpressRoute geographical cove
## Azure regions
-Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When you create an Azure resource, you need to select a resource location. The resource location determines which Azure datacenter or availability zone the resource gets created in.
+Azure regions are global datacenters where Azure compute, networking, and storage resources are hosted. When creating an Azure resource, you need to select the resource location, which determines the specific Azure datacenter (or availability zone) where the resource is deployed.
## ExpressRoute locations
-ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are colocation facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network ΓÇô and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location doesn't need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
-You have access to Azure services across all regions within a geopolitical region if you connected to at least one ExpressRoute location within the geopolitical region.
+ExpressRoute locations, also known as peering locations or meet-me locations, are colocation facilities where Microsoft Enterprise Edge (MSEE) devices are situated. These locations serve as the entry points to Microsoft's network and are globally distributed, offering the ability to connect to Microsoft's network worldwide. ExpressRoute partners and ExpressRoute Direct users establish cross connections to Microsoft's network at these locations. Generally, the ExpressRoute location doesn't need to correspond with the Azure region. For instance, you can create an ExpressRoute circuit with the resource location in *East US* for the *Seattle* peering location.
+
+You have access to Azure services across all regions within a geopolitical region if you're connecting to at least one ExpressRoute location within the geopolitical region.
[!INCLUDE [expressroute-azure-regions-geopolitical-region](../../includes/expressroute-azure-regions-geopolitical-region.md)]
You have access to Azure services across all regions within a geopolitical regio
The following table shows connectivity locations and the service providers for each location. If you want to view service providers and the locations for which they can provide service, see [Locations by service provider](expressroute-locations.md).
-* **Local Azure Regions** refers to the regions that can be accessed by [ExpressRoute Local](expressroute-faqs.md#expressroute-local) at each peering location. **n/a** indicates that ExpressRoute Local isn't available at that peering location.
+* **Local Azure Regions** refers to the regions that can be accessed by [ExpressRoute Local](expressroute-faqs.md#expressroute-local) at each peering location. **&cross;** indicates that ExpressRoute Local isn't available at that peering location.
* **Zone** refers to [pricing](https://azure.microsoft.com/pricing/details/expressroute/).
The following table shows connectivity locations and the service providers for e
### Global commercial Azure
+#### [A-C](#tab/a-c)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
+| **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | &check; | |
+| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | &check; | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Deutsche Telekom AG<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>GlobalConnect<br/>InterCloud<br/>Interxion (Digital Realty)<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telecom Italia Sparkle<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | &check; | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cinia<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion (Digital Realty)<br/>Megaport<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone |
+| **Atlanta** | [Equinix AT1](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/atlanta-data-centers/at1) | 1 | &cross; | &check; | Equinix<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric |
+| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | &cross; | &check; | Devoli<br/>Kordia<br/>Megaport<br/>REANNZ<br/>Spark NZ<br/>Vocus Group NZ |
+| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | &cross; | &check; | AIS<br/>National Telecom UIH |
+| **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | &check; | Colt<br/>Equinix<br/>NTT Global DataCenters EMEA |
+| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | &cross; | LG CNS |
+| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | &check; | Ascenty |
+| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | &check; | CDC<br/>Telstra Corporation |
+| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2 | &check; | CDC<br/>Equinix |
+| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | &check; | BCX<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Global Connect<br/>Teraco<br/>Vodacom |
+| **Chennai** | Tata Communications | 2 | South India | &check; | BSNL<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>Lightstorm<br/>SIFY<br/>Tata Communications<br/>VodafoneIdea |
+| **Chennai2** | Airtel | 2 | South India | &check; | Airtel |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | &check; | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Equinix<br/>InterCloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric<br/>PCCW Global Limited<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | &check; | CoreSite<br/>DE-CIX |
+| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | &cross; | &check; | DE-CIX<br/>GlobalConnect<br/>Interxion (Digital Realty) |
+
+#### [D-I](#tab/d-h)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/)<br/>[Equinix DA6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/dallas-data-centers/da6) | 1 | &cross; | &check; | Aryaka Networks<br/>AT&T Connectivity Plus<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>GTT<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>MCM Telecom<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Telefonica<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Dallas2** | [Digital Realty DFW10](https://www.digitalrealty.com/data-centers/americas/dallas/dfw10) | 1 | &cross; | &check; | Digital Realty |
+| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | &check; | CoreSite<br/>Megaport<br/>PacketFabric<br/>Zayo |
+| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | &check; | Ooredoo Cloud Connect<br/>Vodafone |
+| **Doha2** | [Ooredoo](https://www.ooredoo.qa/) | 3 | Qatar Central | &check; | Ooredoo Cloud Connect |
+| **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | &check; | Etisalat UAE |
+| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | &cross; | DE-CIX<br/>du datamena<br/>Equinix<br/>GBI<br/>Lightstorm<br/>Megaport<br/>Orange<br/>Orixcom |
+| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | &check; | CenturyLink Cloud Connect<br/>Colt<br/>eir<br/>Equinix<br/>GEANT<br/>euNetworks<br/>Interxion (Digital Realty)<br/>Megaport<br/>Zayo |
+| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | &check; | InterCloud<br/>Interxion (Digital Realty)<br/>KPN<br/><br/>Megaport<br/>NL-IX<br/>Orange |
+| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | &check; | AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GBI<br/>GEANT<br/>InterCloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Telia Carrier<br/>T-Systems<br/>Verizon<br/>Zayo |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | &check; | DE-CIX<br/>Deutsche Telekom AG<br/>Equinix<br/>InterCloud<br/>Telefonica |
+| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | &check; | Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>Swisscom |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | &check; | Aryaka Networks<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Chief Telecom<br/>China Telecom Global<br/>China Unicom Global<br/>Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
+| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | &check; | China Mobile International<br/>China Telecom Global<br/>Deutsche Telekom AG<br/>Equinix<br/>iAdvantage<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Vodafone |
+
+#### [J-M](#tab/j-m)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
+| **Jakarta** | [Telin](https://www.telin.net/) | 4 | &cross; | &check; | NTT Communications<br/>Telin<br/>XL Axiata |
+| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | &check; | BCX<br/>British Telecom<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Business<br/>MTN Global Connect<br/>Orange<br/>Teraco<br/>Vodacom |
+| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | &cross; | &cross; | DE-CIX<br/>TIME dotCom |
+| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | &cross; | &check; | CenturyLink Cloud Connect<br/>Megaport<br/>PacketFabric |
+| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | &check; | AT&T NetBond<br/>Bezeq International<br/>British Telecom<br/>CenturyLink<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intelsat<br/>InterCloud<br/>Internet Solutions - Cloud Connect<br/>Interxion (Digital Realty)<br/>Jisc<br/>Level 3 Communications<br/>Megaport<br/>MTN<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telehouse - KDDI<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | &check; | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Epsilon Global Communications<br/>GTT<br/>Interxion (Digital Realty)<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
+| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>CoreSite<br/>China Unicom Global<br/>Cloudflare<br/>Equinix*<br/>Megaport<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | &cross; | &check; | Crown Castle<br/>Equinix<br/>GTT<br/>PacketFabric |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | &cross; | &check; | DE-CIX<br/>InterCloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Telefonica |
+| **Madrid2** | [Equinix MD2](https://www.equinix.com/data-centers/europe-colocation/spain-colocation/madrid-data-centers/md2) | 1 | &cross; | &check; | Equinix |
+| **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | &cross; | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion (Digital Realty)<br/>Jaguar Network<br/>Ooredoo Cloud Connect |
+| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | &check; | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom |
+| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>PitChile |
+| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | Italy North | &check; | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Noovle<br/>Retelit<br/>Vodafone |
+| **Milan2** | [DATA4](https://www.data4group.com/it/data-center-a-milano-italia/) | 1 | Italy North | &check; | |
+| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | &cross; | &check; | Cologix<br/>Megaport |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/)<br/>[Cologix MTL7](https://cologix.com/data-centers/montreal/mtl7/) | 1 | &cross; | &check; | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>RISQ<br/>Telus<br/>Zayo |
+| **Mumbai** | Tata Communications | 2 | West India | &check; | BSNL<br/>British Telecom<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>InterCloud<br/>Lightstorm<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon |
+| **Mumbai2** | Airtel | 2 | West India | &check; | Airtel<br/>Equinix<br/>Sify<br/>Orange<br/>Vodafone Idea |
+| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | &cross; | &check; | Colt<br/>DE-CIX<br/>Megaport |
+
+#### [N-Q](#tab/n-q)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
-| **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | Supported | |
-| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Deutsche Telekom AG<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>GlobalConnect<br/>InterCloud<br/>Interxion (Digital Realty)<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telecom Italia Sparkle<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cinia<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion (Digital Realty)<br/>Megaport<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone |
-| **Atlanta** | [Equinix AT1](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/atlanta-data-centers/at1) | 1 | n/a | Supported | Equinix<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric |
-| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli<br/>Kordia<br/>Megaport<br/>REANNZ<br/>Spark NZ<br/>Vocus Group NZ |
-| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS<br/>National Telecom UIH |
-| **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | Supported | Colt<br/>Equinix<br/>NTT Global DataCenters EMEA |
-| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS |
-| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty |
-| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC<br/>Telstra Corporation |
-| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2 | Supported | CDC<br/>Equinix |
-| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | Supported | BCX<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Global Connect<br/>Teraco<br/>Vodacom |
-| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>Lightstorm<br/>SIFY<br/>Tata Communications<br/>VodafoneIdea |
-| **Chennai2** | Airtel | 2 | South India | Supported | Airtel |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Equinix<br/>InterCloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric<br/>PCCW Global Limited<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
-| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite<br/>DE-CIX |
-| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | DE-CIX<br/>GlobalConnect<br/>Interxion (Digital Realty) |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/)<br/>[Equinix DA6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/dallas-data-centers/da6) | 1 | n/a | Supported | Aryaka Networks<br/>AT&T Connectivity Plus<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>GTT<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>MCM Telecom<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Telefonica<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
-| **Dallas2** | [Digital Realty DFW10](https://www.digitalrealty.com/data-centers/americas/dallas/dfw10) | 1 | n/a | Supported | Digital Realty |
-| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite<br/>Megaport<br/>PacketFabric<br/>Zayo |
-| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect<br/>Vodafone |
-| **Doha2** | [Ooredoo](https://www.ooredoo.qa/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect |
-| **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE |
-| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX<br/>du datamena<br/>Equinix<br/>GBI<br/>Lightstorm<br/>Megaport<br/>Orange<br/>Orixcom |
-| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect<br/>Colt<br/>eir<br/>Equinix<br/>GEANT<br/>euNetworks<br/>Interxion (Digital Realty)<br/>Megaport<br/>Zayo |
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | InterCloud<br/>Interxion (Digital Realty)<br/>KPN<br/><br/>Megaport<br/>NL-IX<br/>Orange |
-| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GBI<br/>GEANT<br/>InterCloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Telia Carrier<br/>T-Systems<br/>Verizon<br/>Zayo |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX<br/>Deutsche Telekom AG<br/>Equinix<br/>InterCloud<br/>Telefonica |
-| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Chief Telecom<br/>China Telecom Global<br/>China Unicom Global<br/>Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
-| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International<br/>China Telecom Global<br/>Deutsche Telekom AG<br/>Equinix<br/>iAdvantage<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Vodafone |
-| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications<br/>Telin<br/>XL Axiata |
-| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX<br/>British Telecom<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Business<br/>MTN Global Connect<br/>Orange<br/>Teraco<br/>Vodacom |
-| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | DE-CIX<br/>TIME dotCom |
-| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Megaport<br/>PacketFabric |
-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond<br/>Bezeq International<br/>British Telecom<br/>CenturyLink<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intelsat<br/>InterCloud<br/>Internet Solutions - Cloud Connect<br/>Interxion (Digital Realty)<br/>Jisc<br/>Level 3 Communications<br/>Megaport<br/>MTN<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telehouse - KDDI<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Epsilon Global Communications<br/>GTT<br/>Interxion (Digital Realty)<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
-| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>CoreSite<br/>China Unicom Global<br/>Cloudflare<br/>Equinix*<br/>Megaport<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
-| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Crown Castle<br/>Equinix<br/>GTT<br/>PacketFabric |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | n/a | Supported | DE-CIX<br/>InterCloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Telefonica |
-| **Madrid2** | [Equinix MD2](https://www.equinix.com/data-centers/europe-colocation/spain-colocation/madrid-data-centers/md2) | 1 | n/a | Supported | Equinix |
-| **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion (Digital Realty)<br/>Jaguar Network<br/>Ooredoo Cloud Connect |
-| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom |
-| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>PitChile |
-| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | Italy North | Supported | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Noovle<br/>Retelit<br/>Vodafone |
-| **Milan2** | [DATA4](https://www.data4group.com/it/data-center-a-milano-italia/) | 1 | Italy North | Supported | |
-| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | n/a | Supported | Cologix<br/>Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/)<br/>[Cologix MTL7](https://cologix.com/data-centers/montreal/mtl7/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>RISQ<br/>Telus<br/>Zayo |
-| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL<br/>British Telecom<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>InterCloud<br/>Lightstorm<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon |
-| **Mumbai2** | Airtel | 2 | West India | Supported | Airtel<br/>Equinix<br/>Sify<br/>Orange<br/>Vodafone Idea |
-| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt<br/>DE-CIX<br/>Megaport |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Coresite<br/>Crown Castle<br/>DE-CIX<br/>Equinix<br/>InterCloud<br/>Lightpath<br/>Megaport<br/>Momentum Telecom<br/>NTT Communications<br/>Packet<br/>Zayo |
-| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom<br/>Colt<br/>Jisc<br/>Level 3 Communications<br/>Next Generation Data |
-| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO<br/>BBIX<br/>Colt<br/>DE-CIX<br/>Equinix<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT SmartConnect<br/>Softbank<br/>Tokai Communications |
-| **Oslo** | DigiPlex Ulven | 1 | Norway East | Supported | GlobalConnect<br/>Megaport<br/>Telenor<br/>Telia Carrier |
-| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intercloud<br/>Interxion<br/>Jaguar Network<br/>Megaport<br/>Orange<br/>Telia Carrier<br/>Zayo<br/>Verizon |
-| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix<br/>InterCloud<br/>Orange |
-| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix<br/>Megaport<br/>NextDC |
-| **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | Supported | AT&T NetBond<br/>Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo |
-| **Phoenix2** | [PhoenixNAP](https://phoenixnap.com/) | 1 | West US 3 | Supported | |
-| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
-| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Airtel<br/>Lightstorm<br/>Tata Communications |
-| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Cirion Technologies<br/>Equinix<br/>MCM Telecom<br/>Megaport<br/>Transtelco |
-| **Quincy** | Sabey Datacenter - Building A | 1 | West US 2 | Supported | |
-| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Cirion Technologies<br/>Equinix |
-| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect<br/>Megaport<br/>Zayo |
-| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | Cirion Technologies<br/>PitChile |
-| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks<br/>Ascenty Data Centers<br/>British Telecom<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Neutrona Networks<br/>Orange<br/>RedCLARA<br/>Tata Communications<br/>Telefonica<br/>UOLDIVEO |
-| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers<br/>Tivit |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Digital Realty<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Pacific Northwest Gigapop<br/>PacketFabric<br/>Telus<br/>Zayo |
-| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom |
-| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
-| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Digital Realty<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
-| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt<br/>Coresite |
-| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>GTT<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Telefonica<br/>Verizon<br/>Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Lightstorm<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
-| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported | GlobalConnect<br/>Megaport<br/>Telenor |
-| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Cinia<br/>Equinix<br/>GlobalConnect<br/>Interxion (Digital Realty)<br/>Megaport<br/>Telia Carrier |
-| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Cello<br/>Devoli<br/>Equinix<br/>GTT<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ |
-| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport<br/>NETSG<br/>NextDC |
-| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom<br/>Chunghwa Telecom<br/>FarEasTone |
-| **Taipei2** | Chunghwa Telecom | 2 | n/a | Supported | |
-| **Tel Aviv** | Bezeq International | 2 | Israel Central | Supported | Bezeq International |
-| **Tel Aviv2** | SDS | 2 | Israel Central | Supported | |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon </br></br> |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
-| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | NEC<br/>SCSK |
-| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo |
-| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire<br/>Zayo |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |
-| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix<br/>Exatel<br/>Orange Poland<br/>T-mobile Poland |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Digital Realty<br/>Equinix<br/>IPC<br/>Internet2<br/>InterCloud<br/>IPC<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
-| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Momentum Telecom<br/>Viasat<br/>Zayo |
-| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Swisscom<br/>Zayo |
-| **Zurich2** | [Equinix ZH5](https://www.equinix.com/data-centers/europe-colocation/switzerland-colocation/zurich-data-centers/zh5) | 1 | Switzerland North | Supported | Equinix |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | &cross; | &check; | CenturyLink Cloud Connect<br/>Coresite<br/>Crown Castle<br/>DE-CIX<br/>Equinix<br/>InterCloud<br/>Lightpath<br/>Megaport<br/>Momentum Telecom<br/>NTT Communications<br/>Packet<br/>Zayo |
+| **Newport (Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | &check; | British Telecom<br/>Colt<br/>Jisc<br/>Level 3 Communications<br/>Next Generation Data |
+| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | &check; | AT TOKYO<br/>BBIX<br/>Colt<br/>DE-CIX<br/>Equinix<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT SmartConnect<br/>Softbank<br/>Tokai Communications |
+| **Oslo** | DigiPlex Ulven | 1 | Norway East | &check; | GlobalConnect<br/>Megaport<br/>Telenor<br/>Telia Carrier |
+| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | &check; | British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intercloud<br/>Interxion<br/>Jaguar Network<br/>Megaport<br/>Orange<br/>Telia Carrier<br/>Zayo<br/>Verizon |
+| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | &check; | Equinix<br/>InterCloud<br/>Orange |
+| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | &cross; | &check; | Equinix<br/>Megaport<br/>NextDC |
+| **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | &check; | AT&T NetBond<br/>Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo |
+| **Phoenix2** | [PhoenixNAP](https://phoenixnap.com/) | 1 | West US 3 | &check; | |
+| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | &check; | |
+| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | &check; | Airtel<br/>Lightstorm<br/>Tata Communications |
+| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | &check; | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
+| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | &cross; | &check; | Cirion Technologies<br/>Equinix<br/>MCM Telecom<br/>Megaport<br/>Transtelco |
+| **Quincy** | Sabey Datacenter - Building A | 1 | West US 2 | &check; | |
++
+#### [R-S](#tab/r-s)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
+| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | &check; | Cirion Technologies<br/>Equinix |
+| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | &check; | CenturyLink Cloud Connect<br/>Megaport<br/>Zayo |
+| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | &cross; | &check; | Cirion Technologies<br/>PitChile |
+| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | &check; | Aryaka Networks<br/>Ascenty Data Centers<br/>British Telecom<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Neutrona Networks<br/>Orange<br/>RedCLARA<br/>Tata Communications<br/>Telefonica<br/>UOLDIVEO |
+| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | &check; | Ascenty Data Centers<br/>Tivit |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | &check; | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Digital Realty<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Pacific Northwest Gigapop<br/>PacketFabric<br/>Telus<br/>Zayo |
+| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | &check; | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom |
+| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | &cross; | KT |
+| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | &check; | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Digital Realty<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | &check; | Colt<br/>Coresite |
+| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | &check; | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>GTT<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Telefonica<br/>Verizon<br/>Vodafone |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | &check; | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Lightstorm<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
+| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | &check; | GlobalConnect<br/>Megaport<br/>Telenor |
+| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | &check; | Cinia<br/>Equinix<br/>GlobalConnect<br/>Interxion (Digital Realty)<br/>Megaport<br/>Telia Carrier |
+| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | &check; | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Cello<br/>Devoli<br/>Equinix<br/>GTT<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ |
+| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | &check; | Megaport<br/>NETSG<br/>NextDC |
+
+#### [T-Z](#tab/t-z)
+
+| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
+|--|--|--|--|--|--|
+| **Taipei** | Chief Telecom | 2 | &cross; | &check; | Chief Telecom<br/>Chunghwa Telecom<br/>FarEasTone |
+| **Taipei2** | Chunghwa Telecom | 2 | &cross; | &check; | |
+| **Tel Aviv** | Bezeq International | 2 | Israel Central | &check; | Bezeq International |
+| **Tel Aviv2** | SDS | 2 | Israel Central | &check; | |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | &check; | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | &check; | AT TOKYO<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
+| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | &check; | NEC<br/>SCSK |
+| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | &check; | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach<br/>Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo |
+| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | &check; | Fibrenoire<br/>Zayo |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | &cross; | &check; | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |
+| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | &check; | Equinix<br/>Exatel<br/>Orange Poland<br/>T-mobile Poland |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | &check; | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Digital Realty<br/>Equinix<br/>Internet2<br/>InterCloud<br/>IPC<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
+| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | &cross; | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Momentum Telecom<br/>Viasat<br/>Zayo |
+| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | &check; | Colt<br/>Equinix<br/>Intercloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Swisscom<br/>Zayo |
+| **Zurich2** | [Equinix ZH5](https://www.equinix.com/data-centers/europe-colocation/switzerland-colocation/zurich-data-centers/zh5) | 1 | Switzerland North | &check; | Equinix |
++
### National cloud environments
Azure national clouds are isolated from each other and from global commercial Az
| Location | Address | Local Azure regions | ER Direct | Service providers |
|--|--|--|--|--|
-| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | n/a | Supported | Equinix |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond<br/>British Telecom<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix<br/>Internet2<br/>Megaport<br/>Verizon |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix<br/>CenturyLink Cloud Connect<br/>Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | US Gov Arizona | Supported | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Megaport |
-| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect<br/>Megaport |
-| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix<br/>Megaport |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East<br/>US Gov Virginia | Supported | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Verizon |
+| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | &cross; | &check; | Equinix |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | &cross; | &check; | AT&T NetBond<br/>British Telecom<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | &cross; | &check; | Equinix<br/>Internet2<br/>Megaport<br/>Verizon |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | &cross; | &check; | Equinix<br/>CenturyLink Cloud Connect<br/>Verizon |
+| **Phoenix** | [CyrusOne Chandler](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | US Gov Arizona | &check; | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Megaport |
+| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | &check; | CenturyLink Cloud Connect<br/>Megaport |
+| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | &cross; | &check; | AT&T<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | &cross; | &check; | Equinix<br/>Megaport |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East<br/>US Gov Virginia | &check; | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Verizon |
### China

| Location | Address | Local Azure regions | ER Direct | Service providers |
|--|--|--|--|--|
-| **Beijing** | China Telecom | n/a | Supported | China Telecom |
-| **Beijing2** | GDS | n/a | Supported | China Telecom<br/>China Mobile<br/>China Unicom<br/>GDS |
-| **Shanghai** | China Telecom | n/a | Supported | China Telecom |
-| **Shanghai2** | GDS | n/a | Supported | China Mobile<br/>China Telecom<br/>China Unicom<br/>GDS |
+| **Beijing** | China Telecom | &cross; | &check; | China Telecom |
+| **Beijing2** | GDS | &cross; | &check; | China Telecom<br/>China Mobile<br/>China Unicom<br/>GDS |
+| **Shanghai** | China Telecom | &cross; | &check; | China Telecom |
+| **Shanghai2** | GDS | &cross; | &check; | China Mobile<br/>China Telecom<br/>China Unicom<br/>GDS |
To learn more, see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
If you're remote and don't have fiber connectivity or want to explore other conn
## Connectivity through additional service providers
+#### [A-K](#tab/a-k)
+
| Location | Exchange | Connectivity providers |
|--|--|--|
| **Amsterdam** | Equinix<br/>Interxion<br/>Level 3 Communications | BICS<br/>CloudXpress<br/>Eurofiber<br/>Fastweb S.p.A<br/>Gulf Bridge International<br/>Kalaam Telecom Bahrain B.S.C<br/>MainOne<br/>Nianet<br/>POST Telecom Luxembourg<br/>Proximus<br/>RETN<br/>TDC Erhverv<br/>Telecom Italia Sparkle<br/>Telekom Deutschland GmbH<br/>Telia |
If you're remote and don't have fiber connectivity or want to explore other conn
| **Hamburg** | Equinix | Cinia |
| **Hong Kong** | Equinix | Chief<br/>Macroview Telecom |
| **Johannesburg** | Teraco | MTN |
+
+#### [L-Q](#tab/l-q)
+
+| Location | Exchange | Connectivity providers |
+|--|--|--|
| **London** | BICS<br/>Equinix<br/>euNetworks | Bezeq International Ltd.<br/>CoreAzure<br/>Epsilon Telecommunications Limited<br/>Exponential E<br/>HSO<br/>NexGen Networks<br/>Proximus<br/>Tamares Telecom<br/>Zain |
| **Los Angeles** | Equinix | Crown Castle<br/>Momentum Telecom<br/>Spectrum Enterprise<br/>Transtelco |
| **Madrid** | Level3 | Zertia |
If you're remote and don't have fiber connectivity or want to explore other conn
| **New York** | Equinix<br/>Megaport | Altice Business<br/>Crown Castle<br/>Spectrum Enterprise<br/>Webair |
| **Paris** | Equinix | Proximus |
| **Quebec City** | Megaport | Fibrenoire |
+
+#### [S-Z](#tab/s-z)
+
+| Location | Exchange | Connectivity providers |
+|--|--|--|
| **Sao Paulo** | Equinix | Venha Pra Nuvem |
| **Seattle** | Equinix | Alaska Communications<br/>Momentum Telecom |
| **Silicon Valley** | Coresite<br/>Equinix<br/>Megaport | Cox Business<br/>Momentum Telecom<br/>Spectrum Enterprise<br/>Windstream<br/>X2nsat Inc. |
If you're remote and don't have fiber connectivity or want to explore other conn
| **Toronto** | Equinix<br/>Megaport | Airgate Technologies Inc.<br/>Beanfield Metroconnect<br/>Aptum Technologies<br/>IVedha Inc<br/>Oncore Cloud Services Inc.<br/>Rogers<br/>Thinktel<br/>Zirro |
| **Washington DC** | Equinix | Altice Business<br/>BICS<br/>Cox Business<br/>Crown Castle<br/>Gtt Communications Inc<br/>Epsilon Telecommunications Limited<br/>Masergy<br/>Momentum Telecom<br/>Windstream |
++

## ExpressRoute system integrators

Enabling private connectivity to fit your needs can be challenging, based on the scale of your network. You can work with any of the system integrators listed in the following table to assist you with onboarding to ExpressRoute.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The tables in this article provide information on ExpressRoute geographical cove
>
## Azure regions
-Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
+
+Azure regions are global datacenters where Azure compute, networking, and storage resources are hosted. When you create an Azure resource, you select a resource location, which determines the specific Azure datacenter (or availability zone) where the resource is deployed.
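
The following is a minimal sketch of how that resource-location choice surfaces in code, using the `azure-mgmt-resource` Python SDK. The subscription ID and resource group name are placeholders, and the snippet is illustrative rather than one of this article's own samples.

```Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID; replace before running.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# "location" is the resource location: it determines which Azure
# datacenter (region) hosts the resource you create.
group = client.resource_groups.create_or_update(
    "example-rg",           # hypothetical resource group name
    {"location": "eastus"},
)
print(group.location)  # -> "eastus"
```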
## ExpressRoute locations
-ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise Edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network ΓÇô and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location doesn't need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
-You'll have access to Azure services across all regions within a geopolitical region if you connected to at least one ExpressRoute location within the geopolitical region.
+ExpressRoute locations, also known as peering locations or meet-me locations, are co-location facilities where Microsoft Enterprise Edge (MSEE) devices are situated. These locations serve as the entry points to Microsoft's network and are globally distributed, offering the ability to connect to Microsoft's network worldwide. ExpressRoute partners and ExpressRoute Direct users establish cross connections to Microsoft's network at these locations. Generally, the ExpressRoute location doesn't need to correspond to the Azure region. For instance, you can create an ExpressRoute circuit with the resource location *East US* that uses the *Seattle* peering location.
+
+You have access to Azure services across all regions within a geopolitical region if you connect to at least one ExpressRoute location within that geopolitical region.
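
To make the region/peering-location split concrete, here is a hedged sketch using the `azure-mgmt-network` Python SDK that requests a circuit whose Azure resource location is *East US* while its physical cross-connect is made at the *Seattle* peering location. The resource group, circuit name, provider, SKU, and bandwidth are assumed example values, not recommendations from this article.

```Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder subscription ID; replace before running.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# "location" is the Azure region of the circuit resource (East US), while
# "peering_location" is the meet-me facility where the MSEE cross-connect
# happens (Seattle). The two don't need to match.
poller = client.express_route_circuits.begin_create_or_update(
    "example-rg",        # hypothetical resource group
    "example-circuit",   # hypothetical circuit name
    {
        "location": "eastus",
        "sku": {
            "name": "Standard_MeteredData",
            "tier": "Standard",
            "family": "MeteredData",
        },
        "service_provider_properties": {
            "service_provider_name": "Equinix",  # assumed provider
            "peering_location": "Seattle",       # ExpressRoute location
            "bandwidth_in_mbps": 200,            # assumed bandwidth
        },
    },
)
circuit = poller.result()
print(circuit.service_provider_properties.peering_location)  # -> "Seattle"
```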
[!INCLUDE [expressroute-azure-regions-geopolitical-region](../../includes/expressroute-azure-regions-geopolitical-region.md)]
You'll have access to Azure services across all regions within a geopolitical re
The following table shows locations by service provider. If you want to view available providers by location, see [Service providers by location](expressroute-locations-providers.md). - ### Global commercial Azure
+#### [A-C](#tab/a-c)
+
+|Service provider | Microsoft Azure | Microsoft 365 | Locations |
+| | | | |
+| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |&check; |&check; | Melbourne<br/>Sydney |
+| **[Airtel](https://www.airtel.in/business/#/)** | &check; | &check; | Chennai2<br/>Mumbai2<br/>Pune |
+| **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | &check; | &check; | Bangkok |
+| **[Aryaka Networks](https://www.aryaka.com/)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC |
+| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | &check; | &check; | Campinas<br/>Sao Paulo<br/>Sao Paulo2 |
+| **AT&T Connectivity Plus** | &check; | &check; | Dallas |
+| **AT&T Dynamic Exchange** | &check; | &check; | Chicago<br/>Dallas<br/>Los Angeles<br/>Miami<br/>Silicon Valley |
+| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>London<br/>Phoenix<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | &check; | &check; | Osaka<br/>Tokyo2 |
+| **[BBIX](https://www.bbix.net/en/service/ix/)** | &check; | &check; | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[BCX](https://www.bcx.co.za/solutions/connectivity/)** | &check; | &check; | Cape Town<br/>Johannesburg|
+| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | &check; | &check; | Montreal<br/>Toronto<br/>Quebec City<br/>Vancouver |
+| **[Bezeq International](https://selfservice.bezeqint.net/web/guest/english/company-profile)** | &check; | &check; | London<br/>Tel Aviv |
+| **[BICS](https://www.bics.com/cloud-connect/)** | &check; | &check; | Amsterdam2<br/>London2 |
+| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai<br/>Newport (Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
+| **BSNL** | &check; | &check; | Chennai<br/>Mumbai |
+| **[C3ntro](https://www.c3ntro.com/)** | &check; | &check; | Miami |
+| **Cello** | &check; | &check; | Sydney |
+| **CDC** | &check; | &check; | Canberra<br/>Canberra2 |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** | &check; | &check; | Amsterdam2<br/>Chicago<br/>Dallas<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>Las Vegas<br/>London<br/>London2<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Toronto<br/>Washington DC<br/>Washington DC2 |
+| **[Chief Telecom](https://www.chief.com.tw/)** |&check; |&check; | Hong Kong<br/>Taipei |
+| **China Mobile International** |&check; |&check; | Hong Kong<br/>Hong Kong2<br/>Singapore |
+| **China Telecom Global** |&check; |&check; | Hong Kong<br/>Hong Kong2 |
+| **China Unicom Global** |&check; |&check; | Frankfurt<br/>Hong Kong<br/>Los Angeles<br/>Silicon Valley<br/>Singapore2<br/>Tokyo2 |
+| **Chunghwa Telecom** |&check; |&check; | Taipei |
+| **[Cinia](https://www.cinia.fi/)** |&check; |&check; | Amsterdam2<br/>Stockholm |
+| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | &check; | &check; | Queretaro<br/>Rio de Janeiro<br/>Santiago |
+| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |&check; |&check; | Miami |
+| **Cloudflare** |&check; |&check; | Los Angeles |
+| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** | &check; | &check; | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Berlin<br/>Chicago<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>London2<br/>Marseille<br/>Milan<br/>Munich<br/>Newport<br/>Osaka<br/>Paris<br/>Paris2<br/>Seoul<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Tokyo2<br/>Washington DC<br/>Zurich |
+| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** | &check; | &check; | Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** | &check; | &check; | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 |
+| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** | &check; | &check; | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
+| **Crown Castle** | &check; | &check; | Los Angeles2<br/>New York<br/>Washington DC |
+
+#### [D-I](#tab/d-i)
+
+|Service provider | Microsoft Azure | Microsoft 365 | Locations |
+| | | | |
+| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | &check; |&check; | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Copenhagen<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Phoenix<br/>Seattle<br/>Singapore2<br/>Tokyo2 |
+| **[Devoli](https://devoli.com/expressroute)** | &check; |&check; | Auckland<br/>Melbourne<br/>Sydney |
+| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | &check; |&check; | Frankfurt |
+| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/solutions/public-cloud/azure-managed-cloud-services/cloud-connect-for-azure)** | &check; |&check; | Amsterdam<br/>Frankfurt2<br/>Hong Kong2 |
+| **[Digital Realty](https://www.digitalrealty.com/partners/microsoft-azure)** | &check; | &check; | Dallas2<br/>Seattle<br/>Silicon Valley<br/>Washington DC |
+| **du datamena** |&check; |&check; | Dubai2 |
+| **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |&check; |&check; | Dublin |
+| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | &check; | &check; | Hong Kong2<br/>London2<br/>Singapore<br/>Singapore2 |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Madrid2<br/>Melbourne<br/>Miami<br/>Milan<br/>Mumbai2<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Queretaro (Mexico)<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br>Zurich2</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **Etisalat UAE** |&check; |&check; | Dubai |
+| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London<br/>Paris |
+| **Exatel** |&check; |&check; | Warsaw |
+| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | &check; | &check; | Taipei |
+| **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | &check; |&check; | Milan |
+| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | &check; | &check; | Montreal<br/>Quebec City<br/>Toronto2 |
+| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | &check; | &check; | Dubai2<br/>Frankfurt |
+| **[GÉANT](https://www.geant.org/Networks)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Marseille |
+| **[GlobalConnect](https://www.globalconnect.no/)** | &check; | &check; | Amsterdam<br/>Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm |
+| **[GlobalConnect DK](https://www.globalconnect.no/)** | &check; | &check; | Amsterdam |
+| **GTT** |&check; |&check; | Amsterdam<br/>Dallas<br/>Los Angeles2<br/>London2<br/>Singapore<br/>Sydney<br/>Washington DC |
+| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | &check;| &check; | Chennai<br/>Mumbai |
+| **[iAdvantage](https://www.scx.sunevision.com/)** | &check; | &check; | Hong Kong2 |
+| **Intelsat** | &check; | &check; | London2<br/>Washington DC2 |
+| **[InterCloud](https://www.intercloud.com/)** |&check; |&check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Dublin2<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>Madrid<br/>Mumbai<br/>New York<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC<br/>Zurich |
+| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** | &check; | &check; | Chicago<br/>Dallas<br/>Silicon Valley<br/>Washington DC |
+| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** | &check; | &check; | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** | &check; | &check; | Cape Town<br/>Johannesburg<br/>London |
+| **[Interxion (Digital Realty)](https://www.digitalrealty.com/partners/microsoft-azure)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Copenhagen<br/>Dublin<br/>Dublin2<br/>Frankfurt<br/>London<br/>London2<br/>Madrid<br/>Marseille<br/>Paris<br/>Stockholm<br/>Zurich |
+| **IPC** | &check; |&check; | Washington DC |
+| **[IRIDEOS](https://irideos.it/)** | &check; | &check; | Milan |
+| **Iron Mountain** | &check; |&check; | Washington DC |
+| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**| &check; | &check; | Amsterdam<br/>London2<br/>Silicon Valley<br/>Tokyo2<br/>Toronto<br/>Washington DC |
+
+#### [J-M](#tab/j-m)
|Service provider | Microsoft Azure | Microsoft 365 | Locations |
| | | | |
-| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne<br/>Sydney |
-| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2<br/>Mumbai2<br/>Pune |
-| **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok |
-| **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC |
-| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | Supported | Supported | Campinas<br/>Sao Paulo<br/>Sao Paulo2 |
-| **AT&T Connectivity Plus** | Supported | Supported | Dallas |
-| **AT&T Dynamic Exchange** | Supported | Supported | Chicago<br/>Dallas<br/>Los Angeles<br/>Miami<br/>Silicon Valley |
-| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>London<br/>Phoenix<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
-| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka<br/>Tokyo2 |
-| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
-| **[BCX](https://www.bcx.co.za/solutions/connectivity/)** | Supported | Supported | Cape Town<br/>Johannesburg|
-| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | Supported | Supported | Montreal<br/>Toronto<br/>Quebec City<br/>Vancouver |
-| **[Bezeq International](https://selfservice.bezeqint.net/web/guest/english/company-profile)** | Supported | Supported | London<br/>Tel Aviv |
-| **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2<br/>London2 |
-| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai<br/>Newport(Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
-| **BSNL** | Supported | Supported | Chennai<br/>Mumbai |
-| **[C3ntro](https://www.c3ntro.com/)** | Supported | Supported | Miami |
-| **Cello** | Supported | Supported | Sydney |
-| **CDC** | Supported | Supported | Canberra<br/>Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** | Supported | Supported | Amsterdam2<br/>Chicago<br/>Dallas<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>Las Vegas<br/>London<br/>London2<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Toronto<br/>Washington DC<br/>Washington DC2 |
-| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong<br/>Taipei |
-| **China Mobile International** |Supported |Supported | Hong Kong<br/>Hong Kong2<br/>Singapore |
-| **China Telecom Global** |Supported |Supported | Hong Kong<br/>Hong Kong2 |
-| **China Unicom Global** |Supported |Supported | Frankfurt<br/>Hong Kong<br/>Los Angeles<br/>Silicon Valley<br/>Singapore2<br/>Tokyo2 |
-| **Chunghwa Telecom** |Supported |Supported | Taipei |
-| **[Cinia](https://www.cinia.fi/)** |Supported |Supported | Amsterdam2<br/>Stockholm |
-| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Queretaro<br/>Rio De Janeiro<br/>Santiago |
-| **Claro** |Supported |Supported | Miami |
-| **Cloudflare** |Supported |Supported | Los Angeles |
-| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Berlin<br/>Chicago<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>London2<br/>Marseille<br/>Milan<br/>Munich<br/>Newport<br/>Osaka<br/>Paris<br/>Paris2<br/>Seoul<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Tokyo2<br/>Washington DC<br/>Zurich |
-| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** | Supported | Supported | Chicago<br/>Silicon Valley<br/>Washington DC |
-| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** | Supported | Supported | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 |
-| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** | Supported | Supported | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
-| **Crown Castle** | Supported | Supported | Los Angeles2<br/>New York<br/>Washington DC |
-| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Copenhagen<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Phoenix<br/>Seattle<br/>Singapore2<br/>Tokyo2 |
-| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland<br/>Melbourne<br/>Sydney |
-| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
-| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/solutions/public-cloud/azure-managed-cloud-services/cloud-connect-for-azure)** | Supported |Supported | Amsterdam<br/>Frankfurt2<br/>Hong Kong2 |
-| **[Digital Realty](https://www.digitalrealty.com/partners/microsoft-azure)** | Supported | Supported | Dallas2<br/>Seattle<br/>Silicon Valley<br/>Washington DC |
-| **du datamena** |Supported |Supported | Dubai2 |
-| **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin |
-| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>London2<br/>Singapore<br/>Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Madrid2<br/>Melbourne<br/>Miami<br/>Milan<br/>Mumbai2<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Queretaro (Mexico)<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br>Zurich2</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
-| **Etisalat UAE** |Supported |Supported | Dubai |
-| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London<br/>Paris |
-| **Exatel** |Supported |Supported | Warsaw |
-| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | Supported | Supported | Taipei |
-| **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Supported |Supported | Milan |
-| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | Supported | Supported | Montreal<br/>Quebec City<br/>Toronto2 |
-| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | Supported | Supported | Dubai2<br/>Frankfurt |
-| **[GÉANT](https://www.geant.org/Networks)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Marseille |
-| **[GlobalConnect](https://www.globalconnect.no/)** | Supported | Supported | Amsterdam<br/>Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm |
-| **[GlobalConnect DK](https://www.globalconnect.no/)** | Supported | Supported | Amsterdam |
-| **GTT** |Supported |Supported | Amsterdam<br/>Dallas<br/>Los Angeles2<br/>London2<br/>Singapore<br/>Sydney<br/>Washington DC |
-| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai<br/>Mumbai |
-| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
-| **Intelsat** | Supported | Supported | London2<br/>Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Dublin2<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>Madrid<br/>Mumbai<br/>New York<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC<br/>Zurich |
-| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** | Supported | Supported | Chicago<br/>Dallas<br/>Silicon Valley<br/>Washington DC |
-| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
-| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** | Supported | Supported | Cape Town<br/>Johannesburg<br/>London |
-| **[Interxion (Digital Realty)](https://www.digitalrealty.com/partners/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Copenhagen<br/>Dublin<br/>Dublin2<br/>Frankfurt<br/>London<br/>London2<br/>Madrid<br/>Marseille<br/>Paris<br/>Stockholm<br/>Zurich |
-| **IPC** | Supported |Supported | Washington DC |
-| **[IRIDEOS](https://irideos.it/)** | Supported | Supported | Milan |
-| **Iron Mountain** | Supported |Supported | Washington DC |
-| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**| Supported | Supported | Amsterdam<br/>London2<br/>Silicon Valley<br/>Tokyo2<br/>Toronto<br/>Washington DC |
-| **Jaguar Network** |Supported |Supported | Marseille<br/>Paris |
-| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** | Supported | Supported | London<br/>London2<br/>Newport(Wales) |
-| **KDDI** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
-| **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** | Supported | Supported | Seoul |
-| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported | Supported | Auckland<br/>Sydney |
-| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam<br/>Dublin2|
-| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul<br/>Seoul2 |
-| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>London<br/>Newport (Wales)<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
-| **LG CNS** | Supported | Supported | Busan<br/>Seoul |
-| **Lightpath** | Supported | Supported | New York<br/>Washington DC |
-| **[Lightstorm](https://polarin.lightstorm.net/)** | Supported | Supported | Chennai<br/>Dubai2<br/>Mumbai<br/>Pune<br/>Singapore2 |
-| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | Supported | Supported | Cape Town<br/>Johannesburg |
-| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
-| **[MCM Telecom](https://www.mcmtelecom.com/alianza-microsoft)** | Supported | Supported | Dallas<br/>Queretaro (Mexico)|
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Dublin2<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
-| **[Momentum Telecom](https://gomomentum.com/)** | Supported | Supported | Atlanta<br/>Chicago<br/>Dallas<br/>Miami<br/>New York<br/>Silicon Valley<br/>Washington DC2 |
-| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Supported | Supported | London |
-| **MTN Global Connect** | Supported | Supported | Cape Town<br/>Johannesburg|
-| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** | Supported | Supported | Bangkok |
-| **NEC** | Supported | Supported | Tokyo3 |
-| **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** | Supported | Supported | Melbourne<br/>Sydney2 |
-| **[Neutrona Networks](https://flo.net/)** | Supported | Supported | Dallas<br/>Los Angeles<br/>Miami<br/>Sao Paulo<br/>Washington DC |
-| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** | Supported | Supported | Newport(Wales) |
-| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | Supported | Supported | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 |
-| **NL-IX** | Supported | Supported | Amsterdam2<br/>Dublin2 |
-| **[NOS](https://www.nos.pt/empresas/solucoes/cloud/cloud-publica/nos-cloud-connect)** | Supported | Supported | Amsterdam2<br/>Madrid |
-| **Noovle** | Supported | Supported | Milan |
-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam<br/>Hong Kong<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
-| **NTT Communications India Network Services Pvt Ltd** | Supported | Supported | Chennai<br/>Mumbai |
-| **[NTT Communications - Flexible InterConnect](https://sdpf.ntt.com/)** |Supported |Supported | Jakarta<br/>Osaka<br/>Singapore2<br/>Tokyo<br/>Tokyo2 |
-| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2<br/>Berlin<br/>Frankfurt<br/>London2 |
-| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha<br/>Doha2<br/>London2<br/>Marseille |
-| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne<br/>Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
-| **[Orange Poland](https://www.orange.pl/duze-firmy/rozwiazania-chmurowe)** | Supported | Supported | Warsaw |
-| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 |
-| **Pacific Northwest Gigapop** | Supported | Supported | Seattle |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Atlanta<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC |
-| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
-| **PitChile** | Supported | Supported | Santiago<br/>Miami |
-| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland |
-| **RedCLARA** | Supported | Supported | Sao Paulo |
-| **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai |
-| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
-| **RISQ** |Supported | Supported | Quebec City<br/>Montreal |
-| **SCSK** |Supported | Supported | Tokyo3 |
-| **[Sejong Telecom](https://www.sejongtelecom.net/)** | Supported | Supported | Seoul |
-| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2<br/>Washington DC |
-| **[SIFY](https://sifytechnologies.com/)** | Supported | Supported | Chennai<br/>Mumbai2 |
-| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
-| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** | Supported | Supported | Seoul |
-| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
-| **[Sohonet](https://www.sohonet.com/product/fastlane/)** | Supported | Supported | Los Angeles<br/>London2 |
-| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland<br/>Sydney |
-| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva<br/>Zurich |
-| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
-| **[Telefonica](https://www.telefonica.com/es/)** | Supported | Supported | Amsterdam<br/>Dallas<br/>Frankfurt2<br/>Hong Kong<br/>Madrid<br/>Sao Paulo<br/>Singapore<br/>Washington DC |
-| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London<br/>London2<br/>Singapore2 |
-| **Telenor** |Supported |Supported | Amsterdam<br/>London<br/>Oslo<br/>Stavanger |
-| **[Telia Carrier](https://www.arelion.com/products-and-services/internet-and-cloud/cloud-connect)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Oslo<br/>Paris<br/>Seattle<br/>Silicon Valley<br/>Stockholm<br/>Washington DC |
-| **[Telin](https://telin.net/)** | Supported | Supported | Jakarta |
-| **Telmex Uninet**| Supported | Supported | Dallas |
-| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** | Supported | Supported | Canberra<br/>Melbourne<br/>Singapore<br/>Sydney |
-| **[Telus](https://www.telus.com)** | Supported | Supported | Montreal<br/>Quebec City<br/>Seattle<br/>Toronto<br/>Vancouver |
-| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** | Supported | Supported | Cape Town<br/>Johannesburg |
-| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |
-| **[Tivit](https://tivit.com/en/home-ingles/)** |Supported |Supported | Sao Paulo2 |
-| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka<br/>Tokyo2 |
-| **TPG Telecom**| Supported | Supported | Melbourne<br/>Sydney |
-| **[Transtelco](https://transtelco.net/enterprise-services/)** | Supported | Supported | Dallas<br/>Queretaro(Mexico City)|
-| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking)** |Supported |Supported | Chicago<br/>Silicon Valley<br/>Washington DC |
-| **[T-Mobile Poland](https://biznes.t-mobile.pl/pl/produkty-i-uslugi/sieci-teleinformatyczne/cloud-on-edge)** |Supported |Supported | Warsaw |
-| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported | Supported | Frankfurt |
-| **UOLDIVEO** | Supported | Supported | Sao Paulo |
-| **[UIH](https://www.uih.co.th/products-services/managed-services/cloud-direct/)** | Supported | Supported | Bangkok |
-| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Mumbai<br/>Paris<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
-| **[Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)** | Supported | Supported | Washington DC2 |
-| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland<br/>Sydney |
-| **Vodacom** | Supported | Supported | Cape Town<br/>Johannesburg|
-| **[Vodafone](https://www.vodafone.com/business/products/cloud-and-edge)** | Supported | Supported | Amsterdam2<br/>Chicago<br/>Dallas<br/>Hong Kong2<br/>London<br/>London2<br/>Milan<br/>Silicon Valley<br/>Singapore |
-| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai<br/>Mumbai2 |
-| **Vodafone Qatar** | Supported | Supported | Doha |
-| **XL Axiata** | Supported | Supported | Jakarta |
-| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>London<br/>London2<br/>Los Angeles<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Toronto2<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich|
+| **Jaguar Network** |&check; |&check; | Marseille<br/>Paris |
+| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** | &check; | &check; | London<br/>London2<br/>Newport(Wales) |
+| **KDDI** | &check; | &check; | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** | &check; | &check; | Seoul |
+| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | &check; | &check; | Auckland<br/>Sydney |
+| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | &check; | &check; | Amsterdam<br/>Dublin2|
+| **[KT](https://cloud.kt.com/)** | &check; | &check; | Seoul<br/>Seoul2 |
+| **[Level 3 Communications](https://www.lumen.com/en-us/edge-cloud/cloud-connect.html)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>London<br/>Newport (Wales)<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
+| **LG CNS** | &check; | &check; | Busan<br/>Seoul |
+| **Lightpath** | &check; | &check; | New York<br/>Washington DC |
+| **[Lightstorm](https://polarin.lightstorm.net/)** | &check; | &check; | Chennai<br/>Dubai2<br/>Mumbai<br/>Pune<br/>Singapore2 |
+| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | &check; | &check; | Cape Town<br/>Johannesburg |
+| **[LGUplus](http://www.uplus.co.kr/)** |&check; |&check; | Seoul |
+| **[MCM Telecom](https://www.mcmtelecom.com/alianza-microsoft)** | &check; | &check; | Dallas<br/>Queretaro (Mexico)|
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Dublin2<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
+| **[Momentum Telecom](https://gomomentum.com/)** | &check; | &check; | Atlanta<br/>Chicago<br/>Dallas<br/>Miami<br/>New York<br/>Silicon Valley<br/>Washington DC2 |
+| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | &check; | &check; | London |
+| **MTN Global Connect** | &check; | &check; | Cape Town<br/>Johannesburg|
+
+#### [N-Q](#tab/n-q)
+
+|Service provider | Microsoft Azure | Microsoft 365 | Locations |
+| | | | |
+| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** | &check; | &check; | Bangkok |
+| **NEC** | &check; | &check; | Tokyo3 |
+| **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** | &check; | &check; | Melbourne<br/>Sydney2 |
+| **[Neutrona Networks](https://flo.net/)** | &check; | &check; | Dallas<br/>Los Angeles<br/>Miami<br/>Sao Paulo<br/>Washington DC |
+| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** | &check; | &check; | Newport(Wales) |
+| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | &check; | &check; | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 |
+| **NL-IX** | &check; | &check; | Amsterdam2<br/>Dublin2 |
+| **[NOS](https://www.nos.pt/empresas/solucoes/cloud/cloud-publica/nos-cloud-connect)** | &check; | &check; | Amsterdam2<br/>Madrid |
+| **Noovle** | &check; | &check; | Milan |
+| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | &check; | &check; | Amsterdam<br/>Hong Kong<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
+| **NTT Communications India Network Services Pvt Ltd** | &check; | &check; | Chennai<br/>Mumbai |
+| **[NTT Communications - Flexible InterConnect](https://sdpf.ntt.com/)** |&check; |&check; | Jakarta<br/>Osaka<br/>Singapore2<br/>Tokyo<br/>Tokyo2 |
+| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |&check; |&check; | Tokyo |
+| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |&check; |&check; | Amsterdam2<br/>Berlin<br/>Frankfurt<br/>London2 |
+| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |&check; |&check; | Osaka |
+| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |&check; |&check; | Doha<br/>Doha2<br/>London2<br/>Marseille |
+| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |&check; |&check; | Melbourne<br/>Sydney |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |&check; |&check; | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **[Orange Poland](https://www.orange.pl/duze-firmy/rozwiazania-chmurowe)** | &check; | &check; | Warsaw |
+| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | &check; | &check; | Dubai2 |
+| **Pacific Northwest Gigapop** | &check; | &check; | Seattle |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | &check; | &check; | Amsterdam<br/>Atlanta<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC |
+| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | &check; | &check; | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
+| **PitChile** | &check; | &check; | Santiago<br/>Miami |
+
+#### [R-S](#tab/r-s)
+
+|Service provider | Microsoft Azure | Microsoft 365 | Locations |
+| | | | |
+| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | &check; | &check; | Auckland |
+| **RedCLARA** | &check; | &check; | Sao Paulo |
+| **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | &check; | &check; | Mumbai |
+| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | &check; | &check; | Milan |
+| **RISQ** |&check; | &check; | Quebec City<br/>Montreal |
+| **SCSK** |&check; | &check; | Tokyo3 |
+| **[Sejong Telecom](https://www.sejongtelecom.net/)** | &check; | &check; | Seoul |
+| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | &check; | &check; | London2<br/>Washington DC |
+| **[SIFY](https://sifytechnologies.com/)** | &check; | &check; | Chennai<br/>Mumbai2 |
+| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |&check; |&check; | Hong Kong2<br/>Singapore<br/>Singapore2 |
+| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** | &check; | &check; | Seoul |
+| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |&check; |&check; | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[Sohonet](https://www.sohonet.com/product/fastlane/)** | &check; | &check; | Los Angeles<br/>London2 |
+| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | &check; | &check; | Auckland<br/>Sydney |
+| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | &check; | &check; | Geneva<br/>Zurich |
+
+#### [T-Z](#tab/t-z)
+
+|Service provider | Microsoft Azure | Microsoft 365 | Locations |
+| | | | |
+| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | &check; | &check; | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
+| **[Telefonica](https://www.telefonica.com/es/)** | &check; | &check; | Amsterdam<br/>Dallas<br/>Frankfurt2<br/>Hong Kong<br/>Madrid<br/>Sao Paulo<br/>Singapore<br/>Washington DC |
+| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | &check; | &check; | London<br/>London2<br/>Singapore2 |
+| **Telenor** |&check; |&check; | Amsterdam<br/>London<br/>Oslo<br/>Stavanger |
+| **[Telia Carrier](https://www.arelion.com/products-and-services/internet-and-cloud/cloud-connect)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Oslo<br/>Paris<br/>Seattle<br/>Silicon Valley<br/>Stockholm<br/>Washington DC |
+| **[Telin](https://telin.net/)** | &check; | &check; | Jakarta |
+| **Telmex Uninet**| &check; | &check; | Dallas |
+| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** | &check; | &check; | Canberra<br/>Melbourne<br/>Singapore<br/>Sydney |
+| **[Telus](https://www.telus.com)** | &check; | &check; | Montreal<br/>Quebec City<br/>Seattle<br/>Toronto<br/>Vancouver |
+| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** | &check; | &check; | Cape Town<br/>Johannesburg |
+| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | &check; | &check; | Kuala Lumpur |
+| **[Tivit](https://tivit.com/en/home-ingles/)** |&check; |&check; | Sao Paulo2 |
+| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | &check; | &check; | Osaka<br/>Tokyo2 |
+| **TPG Telecom**| &check; | &check; | Melbourne<br/>Sydney |
+| **[Transtelco](https://transtelco.net/enterprise-services/)** | &check; | &check; | Dallas<br/>Queretaro (Mexico City)|
+| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking)** |&check; |&check; | Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[T-Mobile Poland](https://biznes.t-mobile.pl/pl/produkty-i-uslugi/sieci-teleinformatyczne/cloud-on-edge)** |&check; |&check; | Warsaw |
+| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | &check; | &check; | Frankfurt |
+| **UOLDIVEO** | &check; | &check; | Sao Paulo |
+| **[UIH](https://www.uih.co.th/products-services/managed-services/cloud-direct/)** | &check; | &check; | Bangkok |
+| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Mumbai<br/>Paris<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **[Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)** | &check; | &check; | Washington DC2 |
+| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | &check; | &check; | Auckland<br/>Sydney |
+| **Vodacom** | &check; | &check; | Cape Town<br/>Johannesburg|
+| **[Vodafone](https://www.vodafone.com/business/products/cloud-and-edge)** | &check; | &check; | Amsterdam2<br/>Chicago<br/>Dallas<br/>Hong Kong2<br/>London<br/>London2<br/>Milan<br/>Silicon Valley<br/>Singapore |
+| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | &check; | &check; | Chennai<br/>Mumbai2 |
+| **Vodafone Qatar** | &check; | &check; | Doha |
+| **XL Axiata** | &check; | &check; | Jakarta |
+| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | &check; | &check; | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>London<br/>London2<br/>Los Angeles<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Toronto2<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich|
++
### National cloud environment
-Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.
+Azure national clouds are isolated from each other and from the Azure public cloud. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.
-### US Government cloud
+#### [US Government cloud](#tab/us-government-cloud)
| Service provider | Microsoft Azure | Office 365 | Locations |
| | | | |
-| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Chicago<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |New York<br/>Phoenix<br/>San Antonio<br/>Washington DC |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Atlanta<br/>Chicago<br/>Dallas<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Washington DC |
-| **[Internet2](https://internet2.edu/services/microsoft-azure-expressroute/)** |Supported |Supported |Dallas |
-| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported |Chicago<br/>Silicon Valley<br/>Washington DC |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Supported | Chicago<br/>Dallas<br/>San Antonio<br/>Seattle<br/>Washington DC |
-| **[Verizon](http://news.verizonenterprise.com/2014/04/secure-cloud-interconnect-solutions-enterprise/)** |Supported |Supported |Chicago<br/>Dallas<br/>New York<br/>Silicon Valley<br/>Washington DC |
+| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |&check; |&check; |Chicago<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |&check; |&check; |New York<br/>Phoenix<br/>San Antonio<br/>Washington DC |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |&check; |&check; |Atlanta<br/>Chicago<br/>Dallas<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Washington DC |
+| **[Internet2](https://internet2.edu/services/microsoft-azure-expressroute/)** |&check; |&check; |Dallas |
+| **[Level 3 Communications](https://www.lumen.com/en-us/edge-cloud/cloud-connect.html)** |&check; |&check; |Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |&check; | &check; | Chicago<br/>Dallas<br/>San Antonio<br/>Seattle<br/>Washington DC |
+| **[Verizon](http://news.verizonenterprise.com/2014/04/secure-cloud-interconnect-solutions-enterprise/)** |&check; |&check; |Chicago<br/>Dallas<br/>New York<br/>Silicon Valley<br/>Washington DC |
-### China
+#### [China cloud](#tab/china-cloud)
| Service provider | Microsoft Azure | Office 365 | Locations |
| | | | |
-| **China Telecom** |Supported |Not Supported |Beijing<br/>Beijing2<br/>Shanghai<br/>Shanghai2 |
-| **China Mobile** | Supported | Not Supported | Beijing2<br/>Shanghai2 |
-| **China Unicom** | Supported | Not Supported | Beijing2<br/>Shanghai2 |
-| **[GDS](http://www.gds-services.com/en/about_2.html)** |Supported |Not Supported |Beijing2<br/>Shanghai2 |
+| **China Telecom** |&check; |&cross; |Beijing<br/>Beijing2<br/>Shanghai<br/>Shanghai2 |
+| **China Mobile** | &check; | &cross; | Beijing2<br/>Shanghai2 |
+| **China Unicom** | &check; | &cross; | Beijing2<br/>Shanghai2 |
+| **[GDS](http://www.gds-services.com/en/about_2.html)** |&check; |&cross; |Beijing2<br/>Shanghai2 |
++
-To learn more<br/>see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
+To learn more, see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
## Connectivity through Exchange providers

If your connectivity provider isn't listed in previous sections, you can still create a connection.

* Check with your connectivity provider to see if they're connected to any of the exchanges in the table above. You can check the following links to gather more information about services offered by exchange providers. Several connectivity providers are already connected to Ethernet exchanges.
+
* [Cologix](https://www.cologix.com/)
* [CoreSite](https://www.coresite.com/)
* [DE-CIX](https://www.de-cix.net/en/services/microsoft-azure-peering-service)
If your connectivity provider isn't listed in previous sections, you can still c
* Have your connectivity provider extend your network to the peering location of choice.
* Ensure that your connectivity provider extends your connectivity in a highly available manner so that there are no single points of failure.
+
* Order an ExpressRoute circuit with the exchange as your connectivity provider to connect to Microsoft.
* Follow steps in [Create an ExpressRoute circuit](expressroute-howto-circuit-classic.md) to set up connectivity.

## Connectivity through satellite operators
+
If you're remote and don't have fiber connectivity, or you want to explore other connectivity options, you can check the following satellite operators.

* Intelsat
If you're remote and don't have fiber connectivity, or you want to explore other
## Connectivity through additional service providers
+#### [A-C](#tab/a-c)
+
+| Connectivity provider | Exchange | Locations |
+| | | |
| **[1CLOUDSTAR](https://www.1cloudstar.com/services/cloudconnect-azure-expressroute.html)** | Equinix |Singapore |
If you're remote and don't have fiber connectivity, or you want to explore other
| **[CoreAzure](https://www.coreazure.com/)**| Equinix | London |
| **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas<br/>Silicon Valley<br/>Washington DC |
| **[Crown Castle](https://fiber.crowncastle.com/solutions/added/cloud-connect)**| Equinix | Atlanta<br/>Chicago<br/>Dallas<br/>Los Angeles<br/>New York<br/>Washington DC |
+
+#### [D-M](#tab/d-m)
+
+| Connectivity provider | Exchange | Locations |
+| | | |
| **[Data Foundry](https://www.datafoundry.com/services/cloud-connect)** | Megaport | Dallas |
| **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London<br/>Singapore<br/>Washington DC |
| **[Eurofiber](https://eurofiber.nl/microsoft-azure/)** | Equinix | Amsterdam |
If you're remote and don't have fiber connectivity, or you want to explore other
| **[Masergy](https://www.masergy.com/sd-wan/multi-cloud-connectivity)** | Equinix | Washington DC |
| **[Momentum Telecom](https://gomomentum.com/)** | Equinix<br/>Megaport | Atlanta<br/>Los Angeles<br/>Seattle<br/>Washington DC |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town<br/>Johannesburg |
+
+#### [N-Z](#tab/n-z)
+
+| Connectivity provider | Exchange | Locations |
+| | | |
| **[NexGen Networks](https://www.nexgen-net.com/nexgen-networks-direct-connect-microsoft-azure-expressroute.html)** | Interxion | London |
| **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam<br/>Frankfurt |
| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Montreal<br/>Toronto |
If you're remote and don't have fiber connectivity, or you want to explore other
| **[Zertia](https://www.zertia.es)**| Level 3 | Madrid |
| **Zirro**| Cologix<br/>Equinix | Montreal<br/>Toronto |
++

## Connectivity through datacenter providers

| Provider | Exchange |
| | |
If you're remote and don't have fiber connectivity, or you want to explore other
| **SINET**|
| **Surfnet, through GÉANT**|
-* If your connectivity provider isn't listed here, check to see if they're connected to any of the ExpressRoute Exchange Partners listed above.
+> [!NOTE]
+> If your connectivity provider isn't listed here, you can verify if they are connected to any of the other ExpressRoute Exchange Partners mentioned previously.
## ExpressRoute system integrators
-Enabling private connectivity to fit your needs can be challenging, based on the scale of your network. You can work with any of the system integrators listed in the following table to assist you with onboarding to ExpressRoute.
+
+Enabling private connectivity to meet your needs can be challenging, depending on the scale of your network. You can collaborate with any of the system integrators listed in the following table to assist with onboarding to ExpressRoute.
| System integrator | Continent |
| | |
| **[Altogee](https://altogee.be/diensten/express-route/)** | Europe |
| **[Avanade Inc.](https://www.avanade.com/)** | Asia<br/>Europe<br/>North America<br/>South America |
-| **[Bright Skies GmbH](https://www.rackspace.com/bright-skies)** | Europe
-| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Australia
+| **[Bright Skies GmbH](https://www.rackspace.com/bright-skies)** | Europe |
+| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Australia |
| **[Equinix Professional Services](https://www.equinix.com/services/consulting/)** | North America |
| **[New Era](https://www.neweratech.com/us/)** | North America |
| **[Lightstream](https://www.lightstream.tech/partners/microsoft-azure/)** | North America |
Enabling private connectivity to fit your needs can be challenging, based on the
| **[Vigilant.IT](https://vigilant.it/networking-services/microsoft-azure-networking/)** | Australia |

## Next steps
+
* For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
-* Ensure that all prerequisites are met. See [ExpressRoute prerequisites](expressroute-prerequisites.md).
+* Ensure that all prerequisites are met. For more information, see [ExpressRoute prerequisites](expressroute-prerequisites.md).
<!--Image References-->
[0]: ./media/expressroute-locations/expressroute-locations-map.png "Location map"
frontdoor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md
This article summarizes best practices for using Azure Front Door.
## General best practices
-### Avoid combining Traffic Manager and Front Door
+### Understanding when to combine Traffic Manager and Front Door
For most solutions, we recommend using *either* Front Door *or* [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md), but not both. Azure Traffic Manager is a DNS-based load balancer. It sends traffic directly to your origin's endpoints. In contrast, Azure Front Door terminates connections at points of presence (PoPs) near the client and establishes separate long-lived connections to the origins. The products work differently and are intended for different use cases.
governance Effect Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effect-modify.md
# Azure Policy definitions modify effect
-The `modify` effect is used to add, update, or remove properties or tags on a subscription or resource during creation or update. A common example is updating tags on resources such as costCenter. Existing non-compliant resources can be remediated with a [remediation task](../how-to/remediate-resources.md). A single Modify rule can have any number of operations. Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation.
+The `modify` effect is used to add, update, or remove properties or tags on a subscription or resource during creation or update. Existing non-compliant resources can also be remediated with a [remediation task](../how-to/remediate-resources.md). Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation. A common example of the `modify` effect is updating tags on resources, such as a 'costCenter' tag.
-The `modify` effect supports the following operations:
+There are some nuances in modification behavior for resource properties. Learn more about the scenarios in which modification is [skipped](#skipped-modification).
+
+A single `modify` rule can have any number of operations. Supported operations are:
- _Add_, _replace_, or _remove_ resource tags. Only tags can be removed. For tags, a Modify policy should have [mode](./definition-structure.md#resource-manager-modes) set to `indexed` unless the target resource is a resource group.
- _Add_ or _replace_ the value of managed identity type (`identity.type`) of virtual machines and Virtual Machine Scale Sets. You can only modify the `identity.type` for virtual machines or Virtual Machine Scale Sets.
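To ground the operations above, here's a minimal sketch of a `modify` policy rule that adds a `costCenter` tag when one is missing. The tag value and the Contributor role definition ID are illustrative assumptions rather than values taken from this article; because JSON doesn't allow comments, the assumptions are called out here instead.

```json
{
  "if": {
    "field": "tags['costCenter']",
    "exists": "false"
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['costCenter']",
          "value": "DefaultCostCenter"
        }
      ]
    }
  }
}
```

During remediation, the assignment's managed identity acts with the roles in `roleDefinitionIds`, and `addOrReplace` writes the tag whether or not it already exists. Per the list above, a tag-oriented definition like this sketch would use `indexed` mode unless it targets resource groups.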
If either of these checks fail, the policy evaluation falls back to the specifie
> same alias behaves differently between API versions, conditional modify operations can be used to
> determine the `modify` operation used for each API version.
+### Skipped modification
There are some cases when modify operations are skipped during evaluation:
-- When the condition of an operation in the `operations` array is evaluated to _false_, that particular operation is skipped.
-- If an alias specified for an operation isn't modifiable in the request's API version, then evaluation uses the conflict effect. If the conflict effect is set to _deny_, the request is blocked. If the conflict effect is set to _audit_, the request is allowed through but the modify operation is skipped.
-- In some cases, modifiable properties are nested within other properties and have an alias like `Microsoft.Storage/storageAccounts/blobServices/deleteRetentionPolicy.enabled`. If the "parent" property, in this case `deleteRetentionPolicy`, isn't present in the request, modification is skipped because that property is assumed to be omitted intentionally.
-- When a modify operation attempts to add or replace the `identity.type` field on a resource other than a Virtual Machine or Virtual Machine Scale Set, policy evaluation is skipped altogether so the modification isn't performed. In this case, the resource is considered not [applicable](../concepts/policy-applicability.md) to the policy.
+- **Existing resources:** When a policy definition using the `modify` effect is run as part of an evaluation cycle, it doesn't make changes to resources that already exist. Instead, it marks any resource that meets the `if` condition as non-compliant, so they can be remediated through a remediation task.
+- **Not applicable:** When the condition of an operation in the `operations` array is evaluated to _false_, that particular operation is skipped.
+- **Property not modifiable:** If an alias specified for an operation isn't modifiable in the request's API version, then evaluation uses the conflict effect. If the conflict effect is set to _deny_, the request is blocked. If the conflict effect is set to _audit_, the request is allowed through but the `modify` operation is skipped.
+- **Property not present:** If a property is not present in the resource payload of the request, then the modification may be skipped. In some cases, modifiable properties are nested within other properties and have an alias like `Microsoft.Storage/storageAccounts/blobServices/deleteRetentionPolicy.enabled`. If the "parent" property, in this case `deleteRetentionPolicy`, isn't present in the request, modification is skipped because that property is assumed to be omitted intentionally. For a practical example, go to section [Example of property not present](#example-of-property-not-present).
+- **Non-VM or VMSS identity operation:** When a modify operation attempts to add or replace the `identity.type` field on a resource other than a Virtual Machine or Virtual Machine Scale Set, policy evaluation is skipped altogether so the modification isn't performed. In this case, the resource is considered not [applicable](../concepts/policy-applicability.md) to the policy.
+
+#### Example of property not present
+
+Modification of resource properties depends on the API request and the updated resource payload. The payload can depend on the client used, such as the Azure portal, and on other factors like the resource provider.
+
+Imagine you apply a policy that modifies tags on a virtual machine (VM). Every time the VM is updated, such as during resizing or disk changes, the tags are updated accordingly regardless of the contents of the VM payload. This is because tags are independent of the VM properties.
-When a policy definition using the `modify` effect is run as part of an evaluation cycle, it doesn't make changes to resources that already exist. Instead, it marks any resource that meets the `if` condition as non-compliant.
+However, if you apply a policy that modifies properties on a VM, modification depends on the resource payload. If you attempt to modify properties that aren't included in the update payload, the modification doesn't take place. For instance, this can happen when patching the `assessmentMode` property of a VM (alias `Microsoft.Compute/virtualMachines/osProfile.windowsConfiguration.patchSettings.assessmentMode`). The property is nested, so if its parent properties aren't included in the request, the omission is assumed to be intentional and modification is skipped. For modification to take place, the resource payload should contain this context, as sketched below.
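As a minimal sketch of that nested alias inside a `modify` rule's `operations` array, the entry might look like the following; the `AutomaticByPlatform` value is an illustrative assumption. Whether the operation actually applies depends on the request payload including the parent `osProfile.windowsConfiguration.patchSettings` chain, as described above.

```json
{
  "operation": "addOrReplace",
  "field": "Microsoft.Compute/virtualMachines/osProfile.windowsConfiguration.patchSettings.assessmentMode",
  "value": "AutomaticByPlatform"
}
```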
## Modify properties
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
As mentioned above, Microsoft recommends that HDInsight clusters be regularly mi
* **Multiple workloads on the same cluster**. In HDInsight 4.0, the Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. [Follow these steps to set up both clusters in Azure HDInsight](interactive-query/apache-hive-warehouse-connector.md). Similarly, integrating [Spark with HBASE](hdinsight-using-spark-query-hbase.md) requires two different clusters.
* **Custom Ambari DB password changed**. The Ambari DB password is set during cluster creation and there's no current mechanism to update it. If a customer deploys the cluster with a [custom Ambari DB](hdinsight-custom-ambari-db.md), they have the ability to change the DB password on the SQL DB; however, there's no way to update this password for a running HDInsight cluster.
* **Modifying HDInsight Load Balancers**. The HDInsight load balancers that are automatically deployed for Ambari and SSH access **should not** be modified or deleted. If you modify the HDInsight load balancer(s) and it breaks the cluster functionality, you will be advised to redeploy the cluster.
+ * **Reusing Ranger 4.X Databases in 5.X**. HDInsight 5.1 has [Apache Ranger version 2.3.0](https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+2.3.0+-+Release+Notes), which is a major version upgrade from 1.2.0 in HDInsight 4.X clusters. Reuse of an HDInsight 4.X Ranger database in HDInsight 5.1 would prevent the Ranger service from starting due to differences in the DB schema. You would need to create an empty Ranger database to successfully deploy HDInsight 5.1 ESP clusters.
## Next steps
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
If there are running jobs while scale-down is triggered, then we can expect one
- Query completes successfully without any impact.
-> [!NOTE]
-> It is recommended to plan approprite down time with the users during the scale down schedules.
+> [!NOTE]
+> It is recommended to plan appropriate downtime with users during the scale-down schedules.
<b>2. What happens to the running Spark jobs when using Hive Warehouse Connector to execute queries in the LLAP Cluster with Auto scale enabled?</b>
healthcare-apis Manage Access Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/manage-access-rbac.md
Keep in mind the following points about Azure role assignments with the de-ident
- When the de-identification service is locked with an [Azure Resource Manager read-only lock](/azure/azure-resource-manager/management/lock-resources), the lock prevents the assignment of Azure roles that are scoped to the de-identification service.
- When Azure deny assignments have been applied, your access might be blocked even if you have a role assignment. For more information, see [Understand Azure deny assignments](/azure/role-based-access-control/deny-assignments).
-You can use different tools to assign built-in roles.
+You can use different tools to assign built-in roles. Select the applicable tab for details.
# [Azure portal](#tab/azure-portal)
To assign an Azure role to a security principal with PowerShell, call the [New-A
The format of the command can differ based on the scope of the assignment, but `ObjectId` and `RoleDefinitionName` are required parameters. While the `Scope` parameter is optional, you should set it to retain the principle of least privilege. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised.
-The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>`
+The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>`
The example assigns the **DeID Data Owner** built-in role to a user, scoped to a specific de-identification service. Make sure to replace the placeholder values in angle brackets `<>` with your own values:
in angle brackets `<>` with your own values:
New-AzRoleAssignment -SignInName <Email> `
  -RoleDefinitionName "DeID Data Owner" `
- -Scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>"
+ -Scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>"
```
A successful response should look like:
-
```console
-RoleAssignmentId : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>/providers/Microsoft.Authorization/roleAssignments/<Role Assignment ID>
-Scope : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>
+RoleAssignmentId : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>/providers/Microsoft.Authorization/roleAssignments/<Role Assignment ID>
+Scope : /subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>
DisplayName        : Mark Patrick
SignInName         : markpdaniels@contoso.com
RoleDefinitionName : DeID Data Owner
RoleDefinitionId : <Role Definition ID>
ObjectId           : <Object ID>
ObjectType         : User
CanDelegate        : False
```

For more information, see [Assign Azure roles using Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell).
To assign an Azure role to a security principal with Azure CLI, use the [az role
The format of the command can differ based on the type of security principal, but `role` and `scope` are required parameters.
-The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>`
+The scope for a de-identification service (preview) is in the form `/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>`
The following example assigns the **DeID Data Owner** built-in role to a user, scoped to a specific de-identification service. Make sure to replace the placeholder values in angle brackets `<>` with your own values:
The following example assigns the **DeID Data Owner** built-in role to a user, s
az role assignment create \
    --assignee <Email> \
    --role "DeID Data Owner" \
- --scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<Deidentification Service Name>"
+ --scope "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.HealthDataAIServices/deidServices/<De-identification Service Name>"
```

For more information, see [Assign Azure roles using Azure CLI](/azure/role-based-access-control/role-assignments-cli).
healthcare-apis Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/managed-identities.md
Last updated 07/17/2024
# Use managed identities with the de-identification service (preview)
-Managed identities provide Azure services with a secure, automatically managed identity in Microsoft Entra ID. Using managed identities eliminates the need for developers having to manage credentials by providing an identity. There are two types of managed identities: system-assigned and user-assigned. The de-identification service supports both.
+Managed identities provide Azure services with a secure, automatically managed identity in Microsoft Entra ID. Using managed identities eliminates the need for developers to manage credentials by providing an identity. There are two types of managed identities: system-assigned and user-assigned. The de-identification service supports both.
Managed identities can be used to grant the de-identification service (preview) access to your storage account for batch processing. In this article, you learn how to assign a managed identity to your de-identification service. ## Prerequisites -- Understand the differences between **system-assigned** and **user-assigned** described in [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
+- Understand the differences between **system-assigned** and **user-assigned** managed identities, described in [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
- A de-identification service (preview) in your Azure subscription. If you don't have a de-identification service, follow the steps in [Quickstart: Deploy the de-identification service](quickstart.md). ## Create an instance of the de-identification service (preview) in Azure Health Data Services with a system-assigned managed identity
the resource definition, replacing **resource-id** with the Azure Resource Manag
## Supported scenarios using managed identities
-Managed identities assigned to the de-identification service (preview) can be used to allow access to Azure Blob Storage for batch de-identification jobs. The service acquires a token as
-the managed identity to access Blob Storage and de-identify blobs that match a specified pattern. For more information, including how to grant access to your managed identity,
-see [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md).
+Managed identities assigned to the de-identification service (preview) can be used to allow access to Azure Blob Storage for batch de-identification jobs. The service acquires a token as the managed identity to access Blob Storage and de-identify blobs that match a specified pattern. For more information, including how to grant access to your managed identity, see [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md).
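For example, granting that access with the Azure CLI might look like the following sketch (the variable names are placeholders that mirror the cleanup command in the .NET quickstart):

```bash
# Sketch: grant the service's managed identity blob access on the storage account.
az role assignment create \
    --assignee $DEID_SERVICE_PRINCIPAL_ID \
    --role "Storage Blob Data Contributor" \
    --scope $STORAGE_ACCOUNT_ID
```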
## Clean-up steps When you remove a system-assigned identity, you delete it from Microsoft Entra ID. System-assigned identities are also automatically removed from Microsoft Entra ID
-when you delete the de-identification service (preview).
+when you delete the de-identification service (preview), as described in the following steps.
# [Azure portal](#tab/portal)
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/overview.md
Title: Overview of the de-identification service (preview) in Azure Health Data Services
-description: Learn how the de-identification service (preview) in Azure Health Data Services anonymizes clinical data, ensuring HIPAA compliance while retaining data relevance for research and analytics.
+ Title: Overview of the de-identification service (preview) in Azure Health Data Services
+description: Learn how the de-identification service (preview) in Azure Health Data Services anonymizes clinical data, ensuring HIPAA compliance while retaining data relevance for research and analytics.
# What is the de-identification service (preview)?
-The de-identification service (preview) in Azure Health Data Services enables healthcare organizations to anonymize clinical data so that the resulting data retains its clinical relevance and distribution while also adhering to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule. The service uses state-of-the-art machine learning models to automatically extract, redact, or surrogate 28 entities, including the HIPAA 18 Protected Health Information (PHI) identifiers ΓÇô from unstructured text such as clinical notes, transcripts, messages, or clinical trial studies.
+The de-identification service (preview) in Azure Health Data Services enables healthcare organizations to anonymize clinical data so that the resulting data retains its clinical relevance and distribution while also adhering to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule. The service uses state-of-the-art machine learning models to automatically extract, redact, or surrogate 28 entities, including the HIPAA 18 Protected Health Information (PHI) identifiers, from unstructured text such as clinical notes, transcripts, messages, or clinical trial studies.
## Use de-identified data in research, analytics, and machine learning
healthcare-apis Quickstart Sdk Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart-sdk-net.md
Title: "Quickstart: Azure Health De-identification client library for .NET"
+ Title: "Quickstart: Azure Health de-identification client library for .NET"
description: A quickstart guide to de-identify health data with the .NET client library
A de-identification service (preview) provides you with an endpoint URL. This en
az resource create -g $RESOURCE_GROUP_NAME -n $DEID_SERVICE_NAME --resource-type microsoft.healthdataaiservices/deidservices --is-full-object -p "{\"identity\":{\"type\":\"SystemAssigned\"},\"properties\":{},\"location\":\"$REGION\"}"
```
-### Create an Azure Storage Account
+### Create an Azure Storage account
1. Install the [Azure CLI](/cli/azure/install-azure-cli)
1. Create an Azure Storage account
A de-identification service (preview) provides you with an endpoint URL. This en
az storage account create --name $STORAGE_ACCOUNT_NAME --resource-group $RESOURCE_GROUP_NAME --location $REGION
```
-### Authorize de-identification service (preview) on storage account
+### Authorize de-identification service (preview) on the Azure Storage account
- Give the de-identification service (preview) access to your storage account
The client library is available through NuGet, as the `Azure.Health.Deidentifica
## Code examples-- [Create a Deidentification Client](#create-a-deidentification-client)
+- [Create a de-identification client](#create-a-de-identification-client)
- [De-identify a string](#de-identify-a-string) - [Tag a string](#tag-a-string)-- [Create a Deidentification Job](#create-a-deidentification-job)-- [Get the status of a Deidentification Job](#get-the-status-of-a-deidentification-job)
+- [Create a de-identification job](#create-a-de-identification-job)
+- [Get the status of a de-identification job](#get-the-status-of-a-de-identification-job)
-### Create a Deidentification Client
+### Create a de-identification client
-Before you can create the client, you need to find your **deidentification service (preview) endpoint URL**.
+Before you can create the client, you need to find your **de-identification service (preview) endpoint URL**.
You can find the endpoint URL with the Azure CLI:
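For example (a sketch; the `properties.serviceUrl` query path is an assumption for this resource type):

```bash
az resource show \
    -n $DEID_SERVICE_NAME -g $RESOURCE_GROUP_NAME \
    --resource-type microsoft.healthdataaiservices/deidservices \
    --query "properties.serviceUrl" --output tsv
```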
content.Operation = OperationType.Tag;
DeidentificationResult result = await client.DeidentifyAsync(content);
```
-### Create a Deidentification Job
+### Create a de-identification job
This function allows you to de-identify all files, filtered via prefix, within an Azure Blob Storage account.
DeidentificationJob job = new(
job = client.CreateJob(WaitUntil.Started, "my-job-1", job).Value;
```
-### Get the status of a Deidentification Job
+### Get the status of a de-identification job
Once a job is created, you can view the status and other details of the job.
dotnet run
## Clean up resources
-### Delete Deidentification Service
+### Delete de-identification service
```bash
az resource delete -n $DEID_SERVICE_NAME -g $RESOURCE_GROUP_NAME --resource-type microsoft.healthdataaiservices/deidservices
```
-### Delete Azure Storage Account
+### Delete Azure Storage account
```bash
az resource show -n $STORAGE_ACCOUNT_NAME -g $RESOURCE_GROUP_NAME --resource-type Microsoft.Storage/storageAccounts
```
-### Delete Role Assignment
+### Delete role assignment
```bash
az role assignment delete --assignee $DEID_SERVICE_PRINCIPAL_ID --role "Storage Blob Data Contributor" --scope $STORAGE_ACCOUNT_ID
az role assignment delete --assignee $DEID_SERVICE_PRINCIPAL_ID --role "Storage
### Unable to access source or target storage
-Ensure the permissions are given and the Managed Identity for the de-identification service (preview) is set up properly.
+Ensure that the required permissions are granted and that the managed identity for the de-identification service (preview) is set up properly.
-See [Authorize Deidentification Service on Storage Account](#authorize-de-identification-service-preview-on-storage-account)
+See [Authorize de-identification service (preview) on the Azure Storage account](#authorize-de-identification-service-preview-on-the-azure-storage-account)
### Job failed with status PartialFailed
See [Sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/healthdata
In this quickstart, you learned: - How to create a de-identification service (preview) and assign a role on a storage account.-- How to create a Deidentification Client
+- How to create a de-identification client
- How to de-identify strings and create jobs on documents within a storage account. > [!div class="nextstepaction"]
healthcare-apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart.md
For more information, see [Use tags to organize your Azure resources](/azure/azu
In the **Managed Identity** tab, you can assign a managed identity to your de-identification service (preview). For more information, see [managed identities](managed-identities.md). 1. To create a system-assigned managed identity, select **On** under **Status**.
-1. To add a user-assigned managed identity, select **Add** to use the selection pane to choose an existing identity to assign.
+1. To add a user-assigned managed identity, select **Add**, and then use the selection pane to assign an existing identity.
## Review and create
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-manage-assets-remotely.md
Now you can define the tags associated with the asset. To add OPC UA tags:
| Node ID | Tag name | Observability mode |
| - | -- | - |
- | ns=3;s=FastUInt10 | temperature | none |
- | ns=3;s=FastUInt100 | Tag 10 | none |
+ | ns=3;s=FastUInt10 | temperature | None |
+ | ns=3;s=FastUInt100 | Tag 10 | None |
1. Select **Manage default settings** to configure default telemetry settings for the asset. These settings apply to all the OPC UA tags that belong to the asset. You can override these settings for each tag that you add. Default telemetry settings include:
You can import up to 1000 OPC UA tags at a time from a CSV file:
1. Create a CSV file that looks like the following example:
- | NodeID | TagName | Sampling Interval Milliseconds | QueueSize | ObservabilityMode |
- ||-|--|--|-|
- | ns=3;s=FastUInt1000 | Tag 1000 | 1000 | 5 | none |
- | ns=3;s=FastUInt1001 | Tag 1001 | 1000 | 5 | none |
- | ns=3;s=FastUInt1002 | Tag 1002 | 5000 | 10 | none |
+ | NodeID | TagName | QueueSize | ObservabilityMode | Sampling Interval Milliseconds |
+ ||-|--|-|--|
+ | ns=3;s=FastUInt1000 | Tag 1000 | 5 | None | 1000 |
+ | ns=3;s=FastUInt1001 | Tag 1001 | 5 | None | 1000 |
+ | ns=3;s=FastUInt1002 | Tag 1002 | 10 | None | 5000 |
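    The same rows as a raw CSV file might look like the following sketch (values copied from the table; treat the exact header names as illustrative, because the import might expect different ones):

    ```csv
    NodeID,TagName,QueueSize,ObservabilityMode,Sampling Interval Milliseconds
    ns=3;s=FastUInt1000,Tag 1000,5,None,1000
    ns=3;s=FastUInt1001,Tag 1001,5,None,1000
    ns=3;s=FastUInt1002,Tag 1002,10,None,5000
    ```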
1. Select **Add tag or CSV > Import CSV (.csv) file**. Select the CSV file you created and select **Open**. The tags defined in the CSV file are imported:
iot-operations Quickstart Add Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-add-assets.md
Add two OPC UA tags on the **Add tags** page. To add each tag, select **Add tag
| Node ID | Tag name | Observability mode |
| - | -- | - |
-| ns=3;s=FastUInt10 | temperature | none |
-| ns=3;s=FastUInt100 | Tag 10 | none |
+| ns=3;s=FastUInt10 | temperature | None |
+| ns=3;s=FastUInt100 | Tag 10 | None |
-The **Observability mode** is one of the following values: `none`, `gauge`, `counter`, `histogram`, or `log`.
+The **Observability mode** is one of the following values: `None`, `Gauge`, `Counter`, `Histogram`, or `Log`.
You can select **Manage default settings** to change the default sampling interval and queue size for each tag.
logic-apps Quickstart Create Example Consumption Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md
This example uses an RSS trigger that checks an RSS feed, based on the specified
|-|-|-|-| | **The RSS feed URL** | Yes | <*RSS-feed-URL*> | The RSS feed URL to monitor. <br><br>This example uses the Wall Street Journal's RSS feed at **https://feeds.a.dj.com/rss/RSSMarketsMain.xml**. However, you can use any RSS feed that doesn't require HTTP authorization. Choose an RSS feed that publishes frequently, so you can easily test your workflow. | | **Chosen Property Will Be Used To Determine Which Items are New** | No | **PublishDate** | The property that determines which items are new. |
- | **Interval** | Yes | **1** | The number of intervals to wait between feed checks. <br><br>This example uses **1** as the interval. |
+ | **Interval** | Yes | **30** | The number of intervals to wait between feed checks. <br><br>This example uses **30** as the interval because this value is the [minimum interval for the **RSS** trigger](/connectors/rss/#general-limits). |
| **Frequency** | Yes | **Minute** | The unit of frequency to use for every interval. <br><br>This example uses **Minute** as the frequency. | | **Time Zone** | No | <*time-zone*> | The time zone to use for checking the RSS feed | | **Start Time** | No | <*start-time*> | The start time to use for checking the RSS feed |
This example uses an Office 365 Outlook action that sends an email each time tha
1. With the cursor still in the **Subject** box, select the dynamic content list (lightning icon).
- :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-open-dynamic-content.png" alt-text="Screenshot shows action named Send an email, cursor in box named Subject, and selected option for dynamic content list." lightbox="media/quickstart-create-example-consumption-workflow/send-email-open-dynamic-content.png":::
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-open-dynamic-content.png" alt-text="Screenshot shows the action named Send an email, cursor in box named Subject, and selected option for dynamic content list." lightbox="media/quickstart-create-example-consumption-workflow/send-email-open-dynamic-content.png":::
1. From the dynamic content list that opens, under **When a feed item is published**, select **Feed title**, which is a trigger output that references the title for the RSS item.
This example uses an Office 365 Outlook action that sends an email each time tha
After you finish, the email subject looks like the following example:
- :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png" alt-text="Screenshot shows action named Send an email, with example email subject and included property named Feed title." lightbox="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png":::
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png" alt-text="Screenshot shows the action named Send an email, with example email subject and included property named Feed title." lightbox="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png":::
> [!NOTE] >
This example uses an Office 365 Outlook action that sends an email each time tha
| `Date published:` | **Feed published on** | The item's publishing date and time | | `Link:` | **Primary feed link** | The URL for the item |
- :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-body.png" alt-text="Screenshot shows action named Send an email, with descriptive text and properties in the box named Body." lightbox="media/quickstart-create-example-consumption-workflow/send-email-body.png":::
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-body.png" alt-text="Screenshot shows the action named Send an email, with descriptive text and properties in the box named Body." lightbox="media/quickstart-create-example-consumption-workflow/send-email-body.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
logic-apps Support Non Unicode Character Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/support-non-unicode-character-encoding.md
description: Handle non-Unicode characters in Azure Logic Apps by converting tex
Previously updated : 01/04/2024 Last updated : 08/21/2024 # Support non-Unicode character encoding in Azure Logic Apps
Last updated 01/04/2024
When you work with text payloads, Azure Logic Apps infers the text is encoded in a Unicode format, such as UTF-8. You might have problems receiving, sending, or processing characters with different encodings in your workflow. For example, you might get corrupted characters in flat files when working with legacy systems that don't support Unicode.
-To work with text that has other character encoding, apply base64 encoding to the non-Unicode payload. This step prevents Logic Apps from assuming the text is in UTF-8 format. You can then convert any .NET-supported encoding to UTF-8 using Azure Functions.
+To work with text that has a different character encoding, apply `base64ToBinary` encoding to the non-Unicode payload. This step prevents Azure Logic Apps from assuming the text is in UTF-8 format. You can then convert any .NET-supported encoding to UTF-8 by using Azure Functions.
-This solution works with both *multi-tenant* and *single-tenant* workflows. You can also [use this solution with the AS2 connector](#convert-payloads-for-as2).
+This solution works with both *multitenant* and *single-tenant* workflows. You can also [use this solution with the AS2 connector](#convert-payloads-for-as2).
## Convert payload encoding
-First, check that your trigger can correctly identify the content type. This step ensures that Logic Apps no longer assumes the text is UTF-8.
+First, check that your trigger can correctly identify the content type. This step ensures that Azure Logic Apps no longer assumes the text is UTF-8.
-In triggers and actions that have the property **Infer Content Type**, select **No**. You can usually find this property in the operation's **Add parameters** list. However, if the operation doesn't include this property, the content type is set by the inbound message.
+In triggers and actions that have the property **Infer Content Type**, select **No**. You can usually find this property in the operation's **Advanced parameters** list. However, if the operation doesn't include this property, the content type is set by the inbound message.
The following list shows some of the connectors where you can disable automatically inferring the content type:

* [OneDrive](/connectors/onedrive/)
* [Azure Blob Storage](/connectors/azureblob/)
* [Azure File Storage](/connectors/azurefile/)
The following list shows some of the connectors where you can disable automatica
* [Google Drive](/connectors/googledrive/)
* [SFTP - SSH](/connectors/sftpwithssh/)
-If you're using the Request trigger for `text/plain` content, you must set the `charset` parameter that is in the call's `Content-Type` header. Otherwise, characters might become corrupted, or the parameter doesn't match the payload's encoding format. For more information, review [how to handle the `text/plain` content type](logic-apps-content-type.md#text-plain).
+If you're using the **Request** trigger for `text/plain` content, you must set the `charset` parameter that is in the call's `Content-Type` header. Otherwise, characters might become corrupted, or the parameter doesn't match the payload's encoding format. For more information, review [how to handle the `text/plain` content type](logic-apps-content-type.md#text-plain).
For example, the HTTP trigger converts the incoming content to UTF-8 when the `Content-Type` header is set with the correct `charset` parameter:
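For instance, an inbound request that declares its real code page might carry a header like the following sketch (`windows-1250` is only an example encoding):

```http
Content-Type: text/plain; charset=windows-1250
```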
If you set the `Content-Type` header to `application/octet-stream`, you also mig
## Base64 encode content
-Before you [base64 encode](workflow-definition-language-functions-reference.md#base64) content to a string, make sure that you [converted the text to UTF-8](#convert-payload-encoding). Otherwise, characters might return corrupted.
+Before you [base64 encode](workflow-definition-language-functions-reference.md#base64) content to a string, make sure that you [convert the text to UTF-8](#convert-payload-encoding). Otherwise, the characters might be corrupted.
Next, convert any .NET-supported encoding to another .NET-supported encoding. Review the [Azure Functions code example](#azure-functions-version) or the [.NET code example](#net-version):
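The linked samples aren't reproduced here. As a rough illustration only, a minimal .NET conversion step might look like the following sketch, assuming a Windows-1250 source encoding and a sample base64 input:

```csharp
using System;
using System.Text;

// Sketch: convert a base64-encoded, Windows-1250 payload to base64-encoded UTF-8.
// "windows-1250" is an example; use the encoding your legacy system actually emits.
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); // required for code-page encodings on .NET Core

string base64Payload = "dGVzdA=="; // placeholder sample; replace with the payload from your workflow
byte[] sourceBytes = Convert.FromBase64String(base64Payload);
byte[] utf8Bytes = Encoding.Convert(Encoding.GetEncoding("windows-1250"), Encoding.UTF8, sourceBytes);
string base64Utf8 = Convert.ToBase64String(utf8Bytes);
Console.WriteLine(base64Utf8);
```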
Example output:
If you need to send a non-Unicode payload from your workflow, do the steps for [converting the payload to UTF-8](#convert-payload-encoding) in reverse. Keep the text in UTF-8 as long as possible within your system. Next, use the same function to convert the base64-encoded UTF-8 characters to the required encoding. Then, apply base64 decoding to the text, and send your payload.
+When you consume the return value from Azure Functions, make sure to use the [**base64ToBinary** function](workflow-definition-language-functions-reference.md#base64tobinary), not the **base64ToString** function.
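For example, if the function is wired up as a hypothetical action named `Convert_to_UTF-8`, the consuming expression might look like this sketch:

```
base64ToBinary(body('Convert_to_UTF-8'))
```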
+ ## Convert payloads for AS2 You can also use this solution with non-Unicode payloads in the [AS2 v2 connector](logic-apps-enterprise-integration-as2.md). If you don't convert payloads that you pass to AS2 to UTF-8, you might experience problems with the payload interpretation. These problems might result in a mismatch with the MIC hash between the partners because of misinterpreted characters.
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Previously updated : 02/27/2024 Last updated : 08/21/2024 #Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
Create a workspace in the studio welcome page by selecting **Create workspace**.
+For more detailed information about creating a workspace, see [Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)](how-to-manage-workspace.md).
+ ## Compute A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
For the content of the file, see [compute YAML examples](https://github.com/Azur
+For more detailed information about creating compute, see:
+
+* [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md)
+* [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md)
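As a quick illustration, a minimal CLI v2 YAML definition for a compute cluster might look like the following sketch (the name and VM size are examples):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
name: cpu-cluster
type: amlcompute
size: Standard_DS3_v2
min_instances: 0
max_instances: 4
idle_time_before_scale_down: 1800
```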
+ ## Datastore Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. You can register and create a datastore to easily connect to your storage account, and access the data in your underlying storage service. The CLI v2 and SDK v2 support the following types of cloud-based storage
For more information, see [environment YAML schema](reference-yaml-environment.m
+For more detailed information about environments, see [Create and manage environments in Azure Machine Learning](how-to-manage-environments-v2.md).
+ ## Data Azure Machine Learning allows you to work with different types of data:
For most scenarios, you'll use URIs (`uri_folder` and `uri_file`) - a location i
An Azure Machine Learning [component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines. Components can do tasks such as data processing, model training, model scoring, and so on. A component is analogous to a function - it has a name, parameters, expects input, and returns output.
-## Next steps
+## Related content
* [How to upgrade from v1 to v2](how-to-migrate-from-v1.md) * [Train models with the v2 CLI and SDK](how-to-train-model.md)
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
Previously updated : 07/15/2023 Last updated : 08/15/2024 #Customer intent: I'm a data scientist with ML knowledge in the machine learning space, looking to build ML models using data in Azure Machine Learning with full control of the model training including debugging and monitoring of live jobs.
Interactive training is supported on **Azure Machine Learning Compute Clusters**
- Review [getting started with training on Azure Machine Learning](./how-to-train-model.md). - For more information, see this link for [VS Code](how-to-setup-vs-code.md) to set up the Azure Machine Learning extension. - Make sure your job environment has the `openssh-server` and `ipykernel ~=6.0` packages installed (all Azure Machine Learning curated training environments have these packages installed by default).-- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than Pytorch, TensorFlow or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
+- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than PyTorch, TensorFlow, or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
- To use SSH, you need an SSH key pair. You can use the `ssh-keygen -f "<filepath>"` command to generate a public and private key pair. ## Interact with your job container
By specifying interactive applications at job creation, you can connect directly
3. Follow the wizard to choose the environment you want to start the job.
-4. In **Job settings** step, add your training code (and input/output data) and reference it in your command to make sure it's mounted to your job.
+4. In the **Training script** step, add your training code (and input/output data) and reference it in your command to make sure it's mounted to your job.
:::image type="content" source="./media/interactive-jobs/sleep-command.png" alt-text="Screenshot of reviewing a drafted job and completing the creation.":::
By specifying interactive applications at job creation, you can connect directly
> [!NOTE] > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing).
-5. Select at least one training application you want to use to interact with the job. If you don't select an application, the debug feature won't be available.
+5. In **Compute** settings, expand the option for **Training applications**. Select at least one training application you want to use to interact with the job. If you don't select an application, the debug feature won't be available.
:::image type="content" source="./media/interactive-jobs/select-training-apps.png" alt-text="Screenshot of selecting a training application for the user to use for a job.":::
To submit a job with a debugger attached and the execution paused, you can use d
> [!NOTE] > Private link-enabled workspaces are not currently supported when attaching a debugger to a job in VS Code.
-1. During job submission (either through the UI, the CLI or the SDK) use the debugpy command to run your python script. For example, the below screenshot shows a sample command that uses debugpy to attach the debugger for a tensorflow script (`tfevents.py` can be replaced with the name of your training script).
+1. During job submission (either through the UI, the CLI, or the SDK), use the debugpy command to run your Python script. For example, the following screenshot shows a sample command that uses debugpy to attach the debugger for a TensorFlow script (`tfevents.py` can be replaced with the name of your training script).
:::image type="content" source="./media/interactive-jobs/use-debugpy.png" alt-text="Screenshot of interactive jobs configuration of debugpy":::
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
Previously updated : 02/02/2024 Last updated : 08/21/2024
providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/com
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
-To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. The following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
+To create or overwrite a named compute resource, you'll use a PUT request. In the following example, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, and `scaleSettings`. The following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
```bash curl -X PUT \
curl -X PUT \
"nodeIdleTimeBeforeScaleDown": "PT30M" } }
- },
- "userAccountCredentials": {
- "adminUserName": "<ADMIN_USERNAME>",
- "adminUserPassword": "<ADMIN_PASSWORD>"
} }' ```
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Environments created from v1 can be used in v2. In v2, environments have new fea
## Managing secrets
-The management of Key Vault secrets differs significantly in V2 compared to V1. The V1 set_secret and get_secret SDK methods are not available in V2. Instead, direct access using Key Vault client libraries should be used.
+The management of Key Vault secrets differs significantly in V2 compared to V1. The V1 `set_secret` and `get_secret` SDK methods aren't available in V2. Instead, use direct access with the Key Vault client libraries. When accessing secrets from a training script, you can use either the managed identity of the compute or your identity.
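For instance, a minimal sketch of the v2 pattern with the Python client library (the vault URL and secret name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves to the compute's managed identity or your signed-in identity.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<your-vault>.vault.azure.net/", credential=credential)
print(client.get_secret("<secret-name>").value)
```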
For details about Key Vault, see [Use authentication credential secrets in Azure Machine Learning training jobs](how-to-use-secrets-in-runs.md?view=azureml-api-2&preserve-view=true).
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
ml_client.begin_create_or_update(entity=compute)
Use the following information to configure **serverless compute** nodes with no public IP address in the VNet for a given workspace:
+> [!IMPORTANT]
+> If you're using serverless compute with no public IP and the workspace uses an IP allow list, you must add an outbound private endpoint to the workspace. The serverless compute needs to communicate with the workspace, but when configured for no public IP, it uses the Azure default outbound access for internet connectivity. The public IP for this outbound traffic is dynamic and can't be added to the IP allow list. Creating an outbound private endpoint to the workspace allows traffic from the serverless compute bound for the workspace to bypass the IP allow list.
+ # [Azure CLI](#tab/cli) Create a workspace:
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Title: Authentication secrets
-description: Learn how to pass secrets to training jobs in secure fashion using Azure Key Vault.
+description: Learn how to securely get secrets from Azure Key Vault in your training jobs by using the Key Vault Secrets client library.
Previously updated : 01/19/2024 Last updated : 08/20/2024 -+
+# Customer intent: As a data scientist, I want to securely access secrets from Azure Key Vault in my training jobs so that I can use them in my training scripts.
# Use authentication credential secrets in Azure Machine Learning jobs
Before following the steps in this article, make sure you have the following pre
* (Optional) An Azure Machine Learning compute cluster configured to use a [managed identity](how-to-create-attach-compute-cluster.md?tabs=azure-studio#set-up-managed-identity). The cluster can be configured for either a system-assigned or user-assigned managed identity.
-* If your job will run on a compute cluster, grant the managed identity for the compute cluster access to the secrets stored in key vault. Or, if the job will run on serverless compute, grant the managed identity specified for the job access to the secrets. The method used to grant access depends on how your key vault is configured:
+* If your job runs on a compute cluster, grant the managed identity for the compute cluster access to the secrets stored in key vault. Or, if the job runs on serverless compute, grant the managed identity specified for the job access to the secrets. The method used to grant access depends on how your key vault is configured:
* [Azure role-based access control (Azure RBAC)](/azure/key-vault/general/rbac-guide): When configured for Azure RBAC, add the managed identity to the __Key Vault Secrets User__ role on your key vault.
* [Azure Key Vault access policy](/azure/key-vault/general/assign-access-policy): When configured to use access policies, add a new policy that grants the __get__ operation for secrets and assign it to the managed identity.
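For the Azure RBAC case, the grant might look like the following Azure CLI sketch (the variable names are placeholders for the identity's principal ID and the key vault's resource ID):

```bash
# Sketch: allow the compute's managed identity to read secrets from the vault.
az role assignment create \
    --assignee $COMPUTE_IDENTITY_PRINCIPAL_ID \
    --role "Key Vault Secrets User" \
    --scope $KEY_VAULT_RESOURCE_ID
```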
Before following the steps in this article, make sure you have the following pre
> [!TIP] > The quickstart link is to the steps for using the Azure Key Vault Python SDK. In the table of contents in the left navigation area are links to other ways to set a key.
-## Getting secrets
+## Get secrets
+
+There are two ways to get secrets during training:
+
+- Using a managed identity associated with the compute resource the training job runs on.
+- Using your identity by having the compute run the job on your behalf.
+
+# [Managed identity](#tab/managed)
1. Add the `azure-keyvault-secrets` and `azure-identity` packages to the [Azure Machine Learning environment](concept-environments.md) used when training the model. For example, by adding them to the conda file used to build the environment.
Before following the steps in this article, make sure you have the following pre
print(secret.value) ```
-## Next steps
+# [Your identity](#tab/user)
+
+1. Add the `azure-keyvault-secrets`, `azure-identity`, and `azure-ai-ml` packages to the [Azure Machine Learning environment](concept-environments.md) used when training the model. For example, by adding them to the conda file used to build the environment.
+
+ The environment is used to build the Docker image that the training job runs in on the compute cluster.
+
+1. From your training code, use the [Azure Machine Learning SDK](/python/api/overview/azure/ai-ml-readme) and [Key Vault client library](/python/api/overview/azure/keyvault-secrets-readme) to authenticate to the key vault with your user identity. The `AzureMLOnBehalfOfCredential` class is used to authenticate on behalf of your user identity:
+
+ ```python
+ from azure.ai.ml.identity import AzureMLOnBehalfOfCredential
+ from azure.keyvault.secrets import SecretClient
+
+ credential = AzureMLOnBehalfOfCredential()
+ secret_client = SecretClient(vault_url="https://my-key-vault.vault.azure.net/", credential=credential)
+ ```
+
+ After authenticating, use the Key Vault client library to retrieve a secret by providing the associated key:
+
+ ```python
+ secret = secret_client.get_secret("secret-name")
+ print(secret.value)
+ ```
+
+1. When you submit the training job, you must specify that it runs on behalf of your identity by using `identity=UserIdentityConfiguration()`. The following example submits a job using this parameter:
+
+ ```python
+ from azure.ai.ml import Input, command
+ from azure.ai.ml.constants import AssetTypes
+ from azure.ai.ml.entities import UserIdentityConfiguration
+
+ job = command(
+ code="./sdk/ml/azure-ai-ml/samples/src",
+ command="python read_data.py --input_data ${{inputs.input_data}}",
+ inputs={"input_data": Input(type=AssetTypes.MLTABLE, path="./sample_data")},
+ environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1",
+ compute="cpu-cluster",
+ identity=UserIdentityConfiguration(),
+ )
+ ```
+
+    For an example of using the Azure CLI to submit a job that uses your identity, see the [on-behalf-of job example on GitHub](https://github.com/Azure/azureml-examples/blob/d4c90eead3c1fd97393d0657f7a78831490adf1c/cli/jobs/single-step/on-behalf-of/README.md).
+++
+## Related content
For an example of submitting a training job using the Azure Machine Learning Python SDK v2, see [Train models with the Python SDK v2](how-to-train-sdk.md).
machine-learning Reference Yaml Connection Ai Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-content-safety.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Ai Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-search.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-services.md
Previously updated : 05/09/2024 Last updated : 08/21/2024
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Api Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-api-key.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-azure-openai.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-blob.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: account key
machine-learning Reference Yaml Connection Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-container-registry.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: Microsoft Entra ID managed identity
machine-learning Reference Yaml Connection Custom Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-custom-key.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: custom key
machine-learning Reference Yaml Connection Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-data-lake.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: service principal
machine-learning Reference Yaml Connection Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-git.md
The `az ml connection` commands can be used to manage both Azure Machine Learnin
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: Personal access token
machine-learning Reference Yaml Connection Onelake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-onelake.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: service principal
machine-learning Reference Yaml Connection Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-openai.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Python Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-python-feed.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: Personal access token
machine-learning Reference Yaml Connection Serp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-serp.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-serverless.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
machine-learning Reference Yaml Connection Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-speech.md
While the `az ml connection` commands can be used to manage both Azure Machine L
## Examples
-Visit [this GitHub resource]() for examples. Several are shown here. These examples would be in the form of YAML files and used from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
+These examples are YAML files that you use from the CLI. For example, `az ml connection create -f <file-name>.yaml`.
### YAML: API key
managed-grafana How To Connect To Data Source Privately https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md
While managed private endpoints are free, there may be charges associated with p
Managed private endpoints work with Azure services that support private link. Using them, you can connect your Azure Managed Grafana workspace to the following Azure data stores over private connectivity: -- Azure Cosmos DB for Mongo DB
+- Azure Cosmos DB for MongoDB ([only for Request Unit (RU) architecture](/azure/cosmos-db/mongodb/introduction#request-unit-ru-architecture))
- Azure Cosmos DB for PostgreSQL
- Azure Data Explorer
- Azure Monitor private link scope (for example, Log Analytics workspace)
migrate Tutorial Discover Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md
ms. Previously updated : 03/12/2024 Last updated : 08/21/2024
After you have performed server discovery and software inventory using the Azure
- | - **Supported Linux OS** | Ubuntu 20.04, RHEL 9 **Hardware configuration required** | 8 GB RAM, with 30 GB storage, 4 Core CPU
- **Network Requirements** | Access to the following endpoints: <br/><br/>*.docker.io <br/></br>*.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) <br/><br/>[Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
+ **Network Requirements** | Access to the following endpoints: <br/><br/> *.docker.io <br/></br> *.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> https://legoonboarding.blob.core.windows.net </br></br> [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) <br/><br/>[Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
After copying the script, you can go to your Linux server, save the script as *Deploy.sh* on the server.
After copying the script, you can go to your Linux server, save the script as *D
- | - **Supported Linux OS** | Ubuntu 20.04, RHEL 9 **Hardware configuration required** | 6 GB RAM, with 30 GB storage on root volume, 4 Core CPU
- **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
+ **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> https://legoonboarding.blob.core.windows.net <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
5. After copying the script, go to your Linux server, save the script as *Deploy.sh* on the server.
programmable-connectivity Azure Programmable Connectivity Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/programmable-connectivity/azure-programmable-connectivity-availability.md
+
+ Title: Availability and redundancy
+
+description: Azure Programmable Connectivity availability and redundancy.
++++ Last updated : 08/21/2024+++
+# Availability and redundancy
+
+Azure Programmable Connectivity (APC) is a regional service that can withstand and automatically handle the loss of one of the datacenters within a region. However, it doesn't automatically
+fail over or replicate data if there's a full regional outage. Customers must take extra actions to implement regional redundancy.
+
+## Enabling regional redundancy
+
+To enable regional redundancy, customers must manually create multiple APC gateways in the regions of their choice. The gateways must have similar sets of Network APIs enabled. Lastly, the calling code should be configured with the URLs of all provisioned gateways and invoke appropriate resiliency strategies.
+
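As one illustration, a client-side failover strategy might look like the following Python sketch (the gateway URLs, path, and payload are placeholders, not real APC values):

```python
import requests

# Placeholder URLs for APC gateways provisioned in two regions.
GATEWAY_URLS = [
    "https://<apc-gateway-westus>.example",
    "https://<apc-gateway-eastus>.example",
]

def call_with_failover(path: str, payload: dict) -> requests.Response:
    """Try each regional gateway in order; return the first successful response."""
    last_error: Exception | None = None
    for base_url in GATEWAY_URLS:
        try:
            response = requests.post(f"{base_url}{path}", json=payload, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException as err:
            last_error = err  # fall through to the next regional gateway
    raise RuntimeError("All APC gateways failed") from last_error
```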
+> [!IMPORTANT]
+> Currently, APC doesn't guarantee the presence of all possible combinations of Network APIs and operators in all regions.
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
Research the following components and verify support for them in the [Data Conne
1. Pagination options to the data source
-We also recommend a tool like Postman to validate the data connector components. For more information, see [Use Postman with the Microsoft Graph API](/graph/use-postman).
+We also recommend testing your components with an API testing tool like one of the following:
+
+ - [Visual Studio Code](https://code.visualstudio.com/download) with an [extension from Visual Studio Marketplace](https://marketplace.visualstudio.com/vscode)
+ - [PowerShell Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod)
+ - [Microsoft Edge - Network Console tool](/microsoft-edge/devtools-guide-chromium/network-console/network-console-tool)
+ - [Bruno](https://www.usebruno.com/)
+ - [curl](https://curl.se/)
+
+ > [!CAUTION]
+ > For scenarios where you have sensitive data, such as credentials, secrets, access tokens,
+ > API keys, and other similar information, make sure to use a tool that protects your data
+ > with the necessary security features, works offline or locally, doesn't sync your data to
+ > the cloud, and doesn't require that you sign in to an online account. This way, you reduce
+ > the risk around exposing sensitive data to the public.
## Build the data connector
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
This article discusses *Shared Access Signatures (SAS)*, how they work, and how
SAS guards access to Service Bus based on authorization rules that are configured either on a namespace, or a messaging entity (queue, or topic). An authorization rule has a name, is associated with specific rights, and carries a pair of cryptographic keys. You use the rule's name and key via the Service Bus SDK or in your own code to generate a SAS token. A client can then pass the token to Service Bus to prove authorization for the requested operation.

> [!NOTE]
-> Azure Service Bus supports authorizing access to a Service Bus namespace and its entities using Microsoft Entra ID. Authorizing users or applications using OAuth 2.0 token returned by Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). With Microsoft Entra ID, there is no need to store the tokens in your code and risk potential security vulnerabilities.
->
-> Microsoft recommends using Microsoft Entra ID with your Azure Service Bus applications when possible. For more information, see the following articles:
-> - [Authenticate and authorize an application with Microsoft Entra ID to access Azure Service Bus entities](authenticate-application.md).
-> - [Authenticate a managed identity with Microsoft Entra ID to access Azure Service Bus resources](service-bus-managed-service-identity.md)
+> Azure Service Bus supports authorizing access to a Service Bus namespace and its entities using Microsoft Entra ID. Authorizing users or applications using an OAuth 2.0 token returned by Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). SAS keys lack fine-grained access control, are difficult to manage and rotate, and don't provide the audit capabilities to associate their use with a specific user or service principal. For these reasons, we recommend using Microsoft Entra ID.
>
-> You can disable local or SAS key authentication for a Service Bus namespace and allow only Microsoft Entra authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md).
+> Microsoft recommends using Microsoft Entra ID with your Azure Service Bus applications when possible. For more information, see the following articles:
> - [Authenticate and authorize an application with Microsoft Entra ID to access Azure Service Bus entities](authenticate-application.md)
> - [Authenticate a managed identity with Microsoft Entra ID to access Azure Service Bus resources](service-bus-managed-service-identity.md)
>
+> You can disable local or SAS key authentication for a Service Bus namespace and allow only Microsoft Entra authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md).
## Overview of SAS

Shared Access Signatures are a claims-based authorization mechanism using simple tokens. When you use SAS, keys are never passed on the wire. Keys are used to cryptographically sign information that can later be verified by the service. SAS can be used similarly to a username and password scheme, where the client is in immediate possession of an authorization rule name and a matching key. SAS can also be used similarly to a federated security model, where the client receives a time-limited and signed access token from a security token service without ever coming into possession of the signing key.
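To make the signing mechanism concrete, here's a minimal Python sketch that generates a Service Bus SAS token from an authorization rule's name and key, following the token format this article describes. The namespace, entity, rule name, and key values are placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Sign a resource URI with an authorization rule's key to produce a SAS token."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The signature covers the URL-encoded resource URI and the expiry time.
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={key_name}"
    )

# Placeholder values -- substitute your namespace, entity, rule name, and key.
token = generate_sas_token(
    "https://<namespace>.servicebus.windows.net/<queue>",
    "RootManageSharedAccessKey",
    "<key>",
)
```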
site-recovery Site Recovery Deployment Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md
Previously updated : 08/07/2024 Last updated : 08/21/2024

# About the Azure Site Recovery Deployment Planner for VMware to Azure

This article is the Azure Site Recovery Deployment Planner user guide for VMware to Azure production deployments.

## Overview
The tool provides the following details:
>[!IMPORTANT]
->
>Because usage is likely to increase over time, all the preceding tool calculations are performed assuming a 30 percent growth factor in workload characteristics. The calculations also use a 95th percentile value of all the profiling metrics, such as read/write IOPS and churn. Both growth factor and percentile calculation are configurable. To learn more about growth factor, see the "Growth-factor considerations" section. To learn more about percentile value, see the "Percentile value used for the calculation" section.
->
## Support matrix

| **Category** | **VMware to Azure** |**Hyper-V to Azure**|**Azure to Azure**|**Hyper-V to secondary site**|**VMware to secondary site** |
|--|--|--|--|--|--|
| Supported scenarios |Yes|Yes|No|Yes*|No |
-| Supported version | vCenter Server 7.0, 6.7, 6.5, 6.0 or 5.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA |
+| Supported version | vCenter Server 8.0, 7.0, 6.7, and 6.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA |
| Supported configuration|vCenter Server, ESXi| Hyper-V cluster, Hyper-V host|NA|Hyper-V cluster, Hyper-V host|NA |
| Number of servers that can be profiled per running instance of Site Recovery Deployment Planner |Single (VMs belonging to one vCenter Server or one ESXi server can be profiled at a time)|Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA |Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA |

*The tool is primarily for the Hyper-V to Azure disaster recovery scenario. For Hyper-V to secondary site disaster recovery, it can be used only to understand source-side recommendations like required network bandwidth, required free storage space on each of the source Hyper-V servers, and initial replication batching numbers and batch definitions. Ignore the Azure recommendations and costs from the report. Also, the Get Throughput operation is not applicable for the Hyper-V-to-secondary-site disaster recovery scenario.

## Prerequisites

The tool has two main phases: profiling and report generation. There is also a third option to calculate throughput only. The requirements for the server from which the profiling and throughput measurement is initiated are presented in the following table.

| Server requirement | Description|
The tool has two main phases: profiling and report generation. There is also a t
| User permissions | Read-only permission for the user account that's used to access the VMware vCenter server/VMware vSphere ESXi host during profiling |

> [!NOTE]
->
>The tool can profile only VMs with VMDK and RDM disks. It can't profile VMs with iSCSI or NFS disks. Site Recovery does support iSCSI and NFS disks for VMware servers. Because the deployment planner isn't inside the guest and it profiles only by using vCenter performance counters, the tool doesn't have visibility into these disk types.
->
## Download and extract the deployment planner tool

1. Download the latest version of [Site Recovery Deployment Planner](https://download.microsoft.com/download/6/5/d/65d39a90-c4e2-49a7-9149-af58deb8ef7d/ASRDeploymentPlanner-v3.0.zip). The tool is packaged in a .zip folder. The current version of the tool supports only the VMware to Azure scenario.
If you have a previous version of Deployment Planner, do either of the following
>[!NOTE]
- >
>When you start profiling with the new version, pass the same output directory path so that the tool appends profile data to the existing files. A complete set of profiled data is used to generate the report. If you pass a different output directory, new files are created and old profiled data isn't used to generate the report.
>
>Each new Deployment Planner version is a cumulative update of the .zip file. You don't need to copy the newest files to the previous folder. You can create and use a new folder.

## Version history

The latest Site Recovery Deployment Planner tool version is 2.5. See the [Site Recovery Deployment Planner version history](./site-recovery-deployment-planner-history.md) page for the fixes that are added in each update.

## Next steps

[Run Site Recovery Deployment Planner](site-recovery-vmware-deployment-planner-run.md)
storage Blob Upload Function Trigger Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md
Title: Upload and analyze a file with Azure Functions (JavaScript) and Blob Storage
+ Title: Upload, analyze files with Azure Functions and Blob Storage
description: With JavaScript, learn how to upload an image to Azure Blob Storage and analyze its content using Azure Functions and Azure AI services Previously updated : 07/06/2023 Last updated : 08/20/2024 ms.devlang: javascript
-# As a JavaScript developer, I want to know how to upload files to blob storage within an application, so that I can adopt this functionality into my own solution.
+#Customer intent: As a JavaScript developer, I want to know how to upload files to blob storage within an application, so that I can adopt this functionality into my own solution.
- # JavaScript Tutorial: Upload and analyze a file with Azure Functions and Blob Storage
-In this tutorial, you'll learn how to upload an image to Azure Blob Storage and process it using Azure Functions, Computer Vision, and Cosmos DB. You'll also learn how to implement Azure Function triggers and bindings as part of this process. Together, these services analyze an uploaded image that contains text, extract the text out of it, and then store the text in a database row for later analysis or other purposes.
+In this tutorial, you learn how to upload an image to Azure Blob Storage and process it using Azure Functions, Computer Vision, and Cosmos DB. You also learn how to implement Azure Function triggers and bindings as part of this process. Together, these services analyze an uploaded image that contains text, extract the text out of it, and then store the text in a database row for later analysis or other purposes.
Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the [overview page](./storage-blobs-introduction.md).
In this tutorial, learn how to:
## Create the storage account and container

The first step is to create the storage account that will hold the uploaded blob data, which in this scenario will be images that contain text. A storage account offers several different services, but this tutorial utilizes Blob Storage only.

### [Visual Studio Code](#tab/storage-resource-visual-studio-code)
To run the project locally, enter the environment variables in the `./local.sett
Although the Azure Function code runs locally, it connects to the cloud-based services for Storage, rather than using any local emulators.
-```
+```json
{ "IsEncrypted": false, "Values": {
Use the following table to help troubleshoot issues during this procedure.
|--|--|
|`await computerVisionClient.read(url);` errors with `Only absolute URLs are supported`|Make sure your `ComputerVisionEndPoint` endpoint is in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/`.|
-## Clean up resources
-
-If you're not going to continue to use this application, you can delete the resources you created by removing the resource group.
-
-1. Select **Resource groups** from the Azure explorer
-1. Find and right-click the `msdocs-storage-function` resource group from the list.
-1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
## Security considerations

This solution, as a beginner tutorial, doesn't demonstrate secure-by-default practices. This is intentional to allow you to be successful in deploying the solution. The next step after that successful deployment is to secure the resources. This solution uses three Azure services, each of which has its own security features and considerations for secure-by-default configuration:
This solution, as a beginner tutorial, doesn't demonstrate secure-by-default pra
* [Azure Functions sample code](https://github.com/Azure-Samples/msdocs-storage-bind-function-service/blob/main/javascript-v4)
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the resources you created by removing the resource group.
+
+1. Select **Resource groups** from the Azure explorer.
+1. Find and right-click the `msdocs-storage-function` resource group from the list.
+1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
## Related content

* [Create a function app that connects to Azure services using identities instead of secrets](/azure/azure-functions/functions-identity-based-connections-tutorial)
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about creating a container using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_container_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for creating a container use the following REST API operation:

- [Create Container](/rest/api/storageservices/create-container) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_container_async.py) code samples from this article (GitHub)
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about deleting a container using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_container_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for deleting or restoring a container use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Delete Container](/rest/api/storageservices/delete-container) (REST API)
- [Restore Container](/rest/api/storageservices/restore-container) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_container_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Soft delete for containers](soft-delete-container-overview.md)
- [Enable and manage soft delete for containers](soft-delete-container-enable.md)
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about leasing a container using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_container_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for leasing a container use the following REST API operation:

- [Lease Container](/rest/api/storageservices/lease-container) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_container.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_container_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

## See also

- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_container_properties_metadata.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_container_properties_metadata_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
The `get_container_properties` method retrieves container properties and metadata by calling both the [Get Container Properties](/rest/api/storageservices/get-container-properties) operation and the [Get Container Metadata](/rest/api/storageservices/get-container-metadata) operation.
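
As a brief illustration of that call, the following Python sketch reads a container's properties and metadata in one client call; the account URL, container name, and credential are placeholders:

```python
from azure.storage.blob import ContainerClient

# Placeholder values -- substitute your account URL, container, and credential.
container_client = ContainerClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",
    container_name="sample-container",
    credential="<credential>",
)

# One client call surfaces both container properties and metadata.
properties = container_client.get_container_properties()
print(properties.last_modified)
print(properties.metadata)
```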
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_container_properties_metadata.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_container_properties_metadata_async.py) code samples from this article (GitHub)
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
The following code example shows how to use the user delegation SAS created in t
To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library method for getting a user delegation key uses the following REST API operation:

- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)
- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about listing containers using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_containers.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_containers_async.py) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for listing containers use the following REST API operation: - [List Containers](/rest/api/storageservices/list-containers2) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_containers.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_containers_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
-## See also
+### See also
+
+- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
-- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blo
To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_blob.py)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods covered in this article use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_blob.py)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/pu
- [Client library reference documentation](/python/api/azure-storage-blob)
- [Client library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob)
- [Package (PyPi)](https://pypi.org/project/azure-storage-blob/)
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about copying blobs using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_put_from_url.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_put_from_url_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods covered in this article use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_put_from_url.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_copy_put_from_url_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
With this basic setup in place, you can implement other examples in this article
## Resources
-To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for Python, see the following resources.
+To learn more about how to delete blobs and restore soft-deleted blobs using the Azure Blob Storage client library for Python, see the following resources.
+
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_blobs_async.py) code samples from this article (GitHub)
### REST API operations
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Delete Blob](/rest/api/storageservices/delete-blob) (REST API)
- [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_delete_blobs_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Soft delete for blobs](soft-delete-blob-overview.md)
-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
+
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about how to download blobs using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_download.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_download_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for downloading blobs use the following REST API operation:

- [Get Blob](/rest/api/storageservices/get-blob) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_download.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_download_async.py) code samples from this article (GitHub)
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about managing blob leases using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_blobs_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for managing blob leases use the following REST API operation:

- [Lease Blob](/rest/api/storageservices/lease-blob)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_lease_blobs_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

-- [Managing Concurrency in Blob storage](concurrency-manage.md)
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
+
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API)
- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags_async.py) code samples from this article (GitHub)
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
Follow these steps to use the asynchronous APIs in your project:
from azure.storage.blob.aio import BlobServiceClient, BlobClient, ContainerClient ```
- The `import asyncio` statement is only required if you're using the library in your code. It's added here for clarity, as the examples in the [developer guide articles](#build-your-application) use the `asyncio` library.
+ The `import asyncio` statement is only required if you're using the library in your code. It's added here for clarity, as the examples in the [developer guide articles](#build-your-app) use the `asyncio` library.
- Create a client object using `async with` to begin working with data resources. Only the top level client needs to use `async with`, as other clients created from it share the same connection pool. In this example, we create a `BlobServiceClient` object using `async with`, and then create a `ContainerClient` object:
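
  A minimal sketch of that pattern follows; the account URL, container name, and credential handling are placeholder assumptions:

  ```python
  import asyncio
  from azure.identity.aio import DefaultAzureCredential
  from azure.storage.blob.aio import BlobServiceClient

  # Placeholder account URL -- substitute your storage account name.
  ACCOUNT_URL = "https://<storage-account-name>.blob.core.windows.net"

  async def main():
      credential = DefaultAzureCredential()
      # Only the top-level client uses async with; clients created from it
      # share the same connection pool.
      async with BlobServiceClient(ACCOUNT_URL, credential=credential) as blob_service_client:
          container_client = blob_service_client.get_container_client("sample-container")
          async for blob in container_client.list_blobs():
              print(blob.name)
      await credential.close()

  asyncio.run(main())
  ```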
Blob async client library information:
## Authorize access and connect to Blob Storage
-To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
+To connect an app to Blob Storage, create an instance of the [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
To learn more about creating and managing client objects, including best practices, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
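
For instance, here's a minimal sketch of creating a service client and drilling down to container and blob clients; the account URL and resource names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Placeholder account URL -- substitute your storage account name.
account_url = "https://<storage-account-name>.blob.core.windows.net"

# DefaultAzureCredential selects an appropriate mechanism for the environment:
# developer sign-in locally, managed identity when deployed to Azure.
blob_service_client = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# Drill down from the service client to the resource you need to work with.
container_client = blob_service_client.get_container_client("sample-container")
blob_client = container_client.get_blob_client("sample-blob.txt")
```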
You can authorize a `BlobServiceClient` object by using a Microsoft Entra author
## [Microsoft Entra ID (recommended)](#tab/azure-ad)
-To authorize with Microsoft Entra ID, you need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). Which type of security principal you need depends on where your application runs. Use the following table as a guide:
+To authorize with Microsoft Entra ID, you need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). Which type of security principal you need depends on where your app runs. Use the following table as a guide:
-| Where the application runs | Security principal | Guidance |
+| Where the app runs | Security principal | Guidance |
| | | |
| Local machine (developing and testing) | Service principal | To learn how to register the app, set up a Microsoft Entra group, assign roles, and configure environment variables, see [Authorize access using developer service principals](/azure/developer/python/sdk/authentication-local-development-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
| Local machine (developing and testing) | User identity | To learn how to set up a Microsoft Entra group, assign roles, and sign in to Azure, see [Authorize access using developer credentials](/azure/developer/python/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
For information about how to obtain account keys and best practice guidelines fo
-## Build your application
+## Build your app
-As you build applications to work with data resources in Azure Blob Storage, your code primarily interacts with three resource types: storage accounts, containers, and blobs. To learn more about these resource types, how they relate to one another, and how apps interact with resources, see [Understand how apps interact with Blob Storage data resources](storage-blob-object-model.md).
+As you build apps to work with data resources in Azure Blob Storage, your code primarily interacts with three resource types: storage accounts, containers, and blobs. To learn more about these resource types, how they relate to one another, and how apps interact with resources, see [Understand how apps interact with Blob Storage data resources](storage-blob-object-model.md).
-The following guides show you how to work with data resources and perform specific actions using the Azure Storage client library for Python:
+The following guides show you how to access data and perform specific actions using the Azure Storage client library for Python:
| Guide | Description |
-|--||
-| [Create a container](storage-blob-container-create-python.md) | Create containers. |
-| [Delete and restore containers](storage-blob-container-delete-python.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. |
-| [List containers](storage-blob-containers-list-python.md) | List containers in an account and the various options available to customize a listing. |
-| [Manage properties and metadata (containers)](storage-blob-container-properties-metadata-python.md) | Get and set properties and metadata for containers. |
-| [Create and manage container leases](storage-blob-container-lease-python.md) | Establish and manage a lock on a container. |
+| | |
+| [Configure a retry policy](storage-retry-policy-python.md) | Implement retry policies for client operations. |
+| [Copy blobs](storage-blob-copy-python.md) | Copy a blob from one location to another. |
+| [Create a container](storage-blob-container-create-python.md) | Create blob containers. |
+| [Create a user delegation SAS (blobs)](storage-blob-user-delegation-sas-create-python.md) | Create a user delegation SAS for a blob. |
+| [Create a user delegation SAS (containers)](storage-blob-container-user-delegation-sas-create-python.md) | Create a user delegation SAS for a container. |
| [Create and manage blob leases](storage-blob-lease-python.md) | Establish and manage a lock on a blob. |
-| [Upload blobs](storage-blob-upload-python.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. |
+| [Create and manage container leases](storage-blob-container-lease-python.md) | Establish and manage a lock on a container. |
+| [Delete and restore](storage-blob-delete-python.md) | Delete blobs and restore soft-deleted blobs. |
+| [Delete and restore containers](storage-blob-container-delete-python.md) | Delete containers and restore soft-deleted containers. |
| [Download blobs](storage-blob-download-python.md) | Download blobs by using strings, streams, and file paths. |
-| [Copy blobs](storage-blob-copy-python.md) | Copy a blob from one location to another. |
-| [List blobs](storage-blobs-list-python.md) | List blobs in different ways. |
-| [Delete and restore](storage-blob-delete-python.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
| [Find blobs using tags](storage-blob-tags-python.md) | Set and retrieve tags, and use tags to find blobs. |
+| [List blobs](storage-blobs-list-python.md) | List blobs in different ways. |
+| [List containers](storage-blob-containers-list-python.md) | List containers in an account and the various options available to customize a listing. |
| [Manage properties and metadata (blobs)](storage-blob-properties-metadata-python.md) | Get and set properties and metadata for blobs. |
+| [Manage properties and metadata (containers)](storage-blob-container-properties-metadata-python.md) | Get and set properties and metadata for containers. |
+| [Performance tuning for data transfers](storage-blobs-tune-upload-download-python.md) | Optimize performance for data transfer operations. |
| [Set or change a blob's access tier](storage-blob-use-access-tier-python.md) | Set or change the access tier for a block blob. |
+| [Upload blobs](storage-blob-upload-python.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. |
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for managing and using blob index tags use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API)
- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_blobs_properties_metadata_tags_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)
-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
+- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
+
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about uploading blobs using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_upload.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_upload_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for uploading blobs use the following REST API operations:
The Azure SDK for Python contains libraries that build on top of the Azure REST
- [Put Blob](/rest/api/storageservices/put-blob) (REST API)
- [Put Block](/rest/api/storageservices/put-block) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_upload.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_upload_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)
-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
+- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
+
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
The following code example shows how to use the user delegation SAS created in t
To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library method for getting a user delegation key uses the following REST API operation:

- [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_create_sas.py)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md)
- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
With this basic setup in place, you can implement other examples in this article
To learn more about how to list blobs using the Azure Blob Storage client library for Python, see the following resources.
+### Code samples
+
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_blobs_async.py) code samples from this article (GitHub)
### REST API operations

The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for listing blobs use the following REST API operation:

- [List Blobs](/rest/api/storageservices/list-blobs) (REST API)
-### Code samples
--- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_blobs.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob_devguide_list_blobs_async.py) code samples from this article (GitHub)

[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]

### See also

- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
+
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
During a download, the Storage client libraries make one download range request
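
To show where these tuning knobs live, here's a hedged Python sketch that sets the transfer-size keyword arguments when constructing a blob client; the values shown are illustrative assumptions, not recommendations:

```python
from azure.storage.blob import BlobClient

# Placeholder values -- substitute your account, container, blob, and credential.
blob_client = BlobClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",
    container_name="sample-container",
    blob_name="sample-blob.txt",
    credential="<credential>",
    max_single_get_size=32 * 1024 * 1024,  # size of the initial download request
    max_chunk_get_size=4 * 1024 * 1024,    # size of each subsequent range request
)

with open("sample-blob.txt", "wb") as local_file:
    download_stream = blob_client.download_blob()
    local_file.write(download_stream.readall())
```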
## Next steps
+- This article is part of the Blob Storage developer guide for Python. See the full list of developer guide articles at [Build your app](storage-blob-python-get-started.md#build-your-app).
- To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md). - To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md).
storage Storage Retry Policy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-python.md
The following code example shows how to configure the retry options using an ins
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob_devguide_retry.py" id="Snippet_retry_linear":::
-## Related content
+## Next steps
+- This article is part of the Blob Storage developer guide for Python. See the full list of developer guide articles at [Build your app](storage-blob-python-get-started.md#build-your-app).
- For architectural guidance and general best practices for retry policies, see [Transient fault handling](/azure/architecture/best-practices/transient-faults). - For guidance on implementing a retry pattern for transient failures, see [Retry pattern](/azure/architecture/patterns/retry).
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
The Azure Storage platform includes the following data
- [Azure Queues](../queues/storage-queues-introduction.md): A messaging store for reliable messaging between application components.
- [Azure Tables](../tables/table-storage-overview.md): A NoSQL store for schemaless storage of structured data.
- [Azure managed Disks](../../virtual-machines/managed-disks-overview.md): Block-level storage volumes for Azure VMs.
-- [Azure Container Storage](../container-storage/container-storage-introduction.md) (preview): A volume management, deployment, and orchestration service built natively for containers.
+- [Azure Container Storage](../container-storage/container-storage-introduction.md): A volume management, deployment, and orchestration service built natively for containers.
Each service is accessed through a storage account with a unique address. To get started, see [Create a storage account](storage-account-create.md).
Elastic SAN is designed for large scale IO-intensive workloads and top tier data
For more information about Azure Elastic SAN, see [What is Azure Elastic SAN?](../elastic-san/elastic-san-introduction.md).
-## Azure Container Storage (preview)
+## Azure Container Storage
Azure Container Storage integrates with Kubernetes and utilizes existing Azure Storage offerings for actual data storage, offering a volume orchestration and management solution purposely built for containers. You can choose any of the supported backing storage options to create a storage pool for your persistent volumes.
Azure Container Storage offers substantial benefits:
- Kubernetes-native volume orchestration
-For more information about Azure Container Storage, see [What is Azure Container Storage? (preview)](../container-storage/container-storage-introduction.md).
+For more information about Azure Container Storage, see [What is Azure Container Storage?](../container-storage/container-storage-introduction.md).
## Queue Storage
Azure Table Storage is now part of Azure Cosmos DB. To see Azure Table Storage d
For more information about Table Storage, see [Overview of Azure Table Storage](../tables/table-storage-overview.md).

## Disk Storage

An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises server, but virtualized. Azure managed disks are stored as page blobs, which are a random IO storage object in Azure. We call a managed disk 'managed' because it's an abstraction over page blobs, blob containers, and Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of the rest. For more information about managed disks, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md).
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
To delete an Azure file share, you can use the Azure portal, Azure PowerShell, o
:::image type="content" source="media/storage-how-to-create-file-share/delete-file-share.png" alt-text="Screenshot of the Azure portal procedure for deleting a file share." border="true" lightbox="media/storage-how-to-create-file-share/delete-file-share.png":::

# [PowerShell](#tab/azure-powershell)
-1. Log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID.
+1. Run the following script. Replace `<ResourceGroup>`, `<StorageAccount>`, and `<FileShare>` with your information.
```azurepowershell
- Login-AzAccount -TenantId <YourTenantID>
- ```
-
-1. Run the following script. Replace `<YourStorageAccountName>`, `<YourStorageAccountKey>`, and `<FileShareName>` with your information. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` cmdlet.
-
- ```azurepowershell
- $context = New-AzStorageContext -StorageAccountName <YourStorageAccountName> -StorageAccountKey <YourStorageAccountKey>
- Remove-AzStorageShare -Context $context -Name "<FileShareName>"
+ Remove-AzRmStorageShare `
+ -ResourceGroupName <ResourceGroup> `
+ -StorageAccountName <StorageAccount> `
+ -Name <FileShare>
```

# [Azure CLI](#tab/azure-cli)
-You can delete an Azure file share with the [`az storage share delete`](/cli/azure/storage/share#az-storage-share-delete) command. Replace `<yourFileShareName>` and `<yourStorageAccountName>` with your information.
+You can delete an Azure file share with the [`az storage share delete`](/cli/azure/storage/share#az-storage-share-delete) command. Replace `<ResourceGroup>`, `<StorageAccount>`, and `<FileShare>` with your information.
```azurecli
-az storage share delete \
- --name <yourFileShareName> \
- --account-name <yourStorageAccountName>
+az storage share-rm delete \
+ --resource-group <ResourceGroup> \
+ --storage-account <StorageAccount> \
+ --name <FileShare>
```
synapse-analytics Workspace Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspace-conditional-access.md
You can now configure conditional access policies for Azure Synapse workspaces.
## Configure conditional access

The following steps show how to configure a conditional access policy for Azure Synapse workspaces.
-1. Sign in to the Azure portal using an account with *global administrator permissions*, select **Microsoft Entra ID**, choose **Security** from the menu.
+1. Sign in to the Azure portal using an account with [conditional access administrator permissions](/entra/identity/role-based-access-control/permissions-reference#conditional-access-administrator), select **Microsoft Entra ID**, and then choose **Security** from the menu.
2. Select **Conditional Access**, then choose **+ New Policy**, and provide a name for the policy. 3. Under **Assignments**, select **Users and groups**, check the **Select users and groups** option, and then select a Microsoft Entra user or group for Conditional access. Click Select, and then click Done. 4. Select **Cloud apps**, click **Select apps**. Select **Microsoft Azure Synapse Gateway**. Then click Select and Done.
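If you'd rather script this than click through the portal, a conditional access policy can also be created with the Microsoft Graph PowerShell SDK. This is a hedged sketch, not the documented procedure; `<GroupObjectId>` and `<SynapseGatewayAppId>` are placeholders you'd look up in your own tenant:

```powershell
# Sketch: create a conditional access policy via the Microsoft Graph PowerShell SDK.
# <GroupObjectId> and <SynapseGatewayAppId> are placeholder values.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    displayName = 'Require MFA for Azure Synapse'
    state       = 'enabled'
    conditions  = @{
        users        = @{ includeGroups = @('<GroupObjectId>') }
        applications = @{ includeApplications = @('<SynapseGatewayAppId>') }
    }
    grantControls = @{
        operator        = 'OR'
        builtInControls = @('mfa')
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```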
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
To resolve this issue, start the RDAgent boot loader:
## Error: INVALID_REGISTRATION_TOKEN or EXPIRED_MACHINE_TOKEN
-On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_REGISTRATION_TOKEN** or **EXPIRED_MACHINE_TOKEN** in the description, the registration token that has been used isn't recognized as valid.
+On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that has `INVALID_REGISTRATION_TOKEN` or `EXPIRED_MACHINE_TOKEN` in its description, the registration key used isn't recognized as valid.
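You can also query the Application log from PowerShell instead of browsing Event Viewer; a small sketch:

```powershell
# Sketch: list the most recent event ID 3277 entries from the Application log.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 3277 } -MaxEvents 5 |
    Format-List TimeCreated, Message
```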
-To resolve this issue, create a valid registration token:
-
-1. To create a new registration token, follow the steps in the [Generate a new registration key for the VM](#step-3-generate-a-new-registration-key-for-the-vm) section.
-
-1. Open Registry Editor.
-
-1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
+To resolve this issue:
-1. Select **IsRegistered**.
+1. Create a new registration key by following the steps in [Generate a registration key](add-session-hosts-host-pool.md#generate-a-registration-key).
-1. In the **Value data:** entry box, type **0** and select **Ok**.
+1. Open a PowerShell prompt as an administrator and run the following commands to add the new registration key to the registry. Replace `<RegistrationToken>` with the new registration token you generated.
-1. Select **RegistrationToken**.
+ ```powershell
+ $newKey = '<RegistrationToken>'
+
+ Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\RDInfraAgent" -Name "IsRegistered" -Value 0 -Force
+ Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\RDInfraAgent" -Name "RegistrationToken" -Value $newKey -Force
+ ```
-1. In the **Value data:** entry box, paste the registration token from step 1.
+1. Next, run the following command to restart the `RDAgentBootLoader` service:
- > [!div class="mx-imgBorder"]
- > ![Screenshot of IsRegistered 0](media/isregistered-token.png)
+ ```powershell
+ Restart-Service RDAgentBootLoader
+ ```
-1. Open a PowerShell prompt as an administrator and run the following command to restart the RDAgentBootLoader service:
+1. Run the following commands to verify that **IsRegistered** is set to 1 and **RegistrationToken** is blank.
```powershell
- Restart-Service RDAgentBootLoader
+ Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\RDInfraAgent" -Name IsRegistered | FL IsRegistered
+ Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\RDInfraAgent" -Name RegistrationToken | FL RegistrationToken
```
-1. Go back to Registry Editor.
+ The output should be similar to the following output:
-1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**.
+ ```output
+ IsRegistered : 1
-1. Verify that **IsRegistered** is set to 1 and there's nothing in the data column for **RegistrationToken**.
+ RegistrationToken :
+ ```
- > [!div class="mx-imgBorder"]
- > ![Screenshot of IsRegistered 1](media/isregistered-registry.png)
+1. Check that your session host is now available in the host pool (one way to check is shown in the sketch that follows). If it isn't, review the Event Viewer entries to see whether any errors are preventing the agent from starting.
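As a sketch of that availability check using the `Get-AzWvdSessionHost` cmdlet from the Az.DesktopVirtualization module, with placeholder resource names:

```powershell
# Sketch: check session host status in the host pool.
# <ResourceGroup> and <HostPoolName> are placeholders.
Get-AzWvdSessionHost `
    -ResourceGroupName <ResourceGroup> `
    -HostPoolName <HostPoolName> |
    Select-Object Name, Status
```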
## Error: Agent cannot connect to broker with INVALID_FORM
virtual-desktop Troubleshoot Vm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-vm-configuration.md
When the Azure Virtual Desktop Agent is first installed on session host VMs (eit
1. If there's already a registration token, remove it with Remove-AzWvdRegistrationInfo. 2. Run the **New-AzWvdRegistrationInfo** cmdlet to generate a new token.
-3. Confirm that the *-ExpriationTime* parameter is set to three days.
+3. Confirm that the *-ExpirationTime* parameter is set to three days.
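A minimal sketch of these steps, assuming placeholder resource group and host pool names:

```azurepowershell
# Sketch: replace an existing registration token with a new three-day token.
# <ResourceGroup> and <HostPoolName> are placeholders.
Remove-AzWvdRegistrationInfo `
    -ResourceGroupName <ResourceGroup> `
    -HostPoolName <HostPoolName>

New-AzWvdRegistrationInfo `
    -ResourceGroupName <ResourceGroup> `
    -HostPoolName <HostPoolName> `
    -ExpirationTime (Get-Date).ToUniversalTime().AddDays(3).ToString('yyyy-MM-ddTHH:mm:ss.fffffffZ')
```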
### Error: Azure Virtual Desktop agent isn't reporting a heartbeat when running Get-AzWvdSessionHost
virtual-desktop Troubleshoot Set Up Issues 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md
New-RdsRoleAssignment -TenantName <Azure Virtual Desktop tenant name> -RoleDefin
### Error: User requires Microsoft Entra multifactor authentication (MFA) Example of raw error:
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 08/14/2024 Last updated : 08/21/2024 # What's new in the Remote Desktop client for Windows
virtual-wan Create Bgp Peering Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md
Previously updated : 11/21/2023 Last updated : 08/13/2024 # Configure BGP peering to an NVA - PowerShell
virtual-wan Expressroute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/expressroute-powershell.md
Previously updated : 11/21/2023 Last updated : 08/13/2024
virtual-wan How To Virtual Hub Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-preference-powershell.md
Previously updated : 11/21/2023 Last updated : 08/13/2024 # Configure virtual hub routing preference - Azure PowerShell
virtual-wan Nat Rules Vpn Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway-powershell.md
Title: 'Configure VPN NAT rules for your gateway using PowerShell'
-description: Learn how to configure NAT rules for your VWAN VPN gateway using PowerShell.
+description: Learn how to configure NAT rules for your Virtual WAN VPN gateway using PowerShell.
Previously updated : 08/24/2023 Last updated : 08/14/2024
virtual-wan Virtual Wan Route Table Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md
Previously updated : 08/24/2023 Last updated : 08/14/2024 # Customer intent: As someone with a networking background, I want to work with routing tables for NVA.
vpn-gateway Vpn Gateway Classic Resource Manager Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md
+
+ Title: Migrate VPN gateways from Classic to Resource Manager
+
+description: Learn about migrating VPN Gateway resources from the classic deployment model to the Resource Manager deployment model.
++++ Last updated : 08/21/2024+++
+# VPN Gateway classic to Resource Manager migration
+
+VPN gateways can now be migrated from the classic deployment model to [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). For more information, see [Resource Manager deployment model](../azure-resource-manager/management/overview.md). In this article, we discuss how to migrate from classic deployments to the Resource Manager model.
+
+> [!IMPORTANT]
+> [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)]
+
+VPN gateways are migrated as part of VNet migration from classic to Resource Manager. This migration is done one VNet at a time and requires no additional tools or prerequisites. The migration steps are identical to the existing VNet migration and are documented on the [IaaS resources migration page](../virtual-machines/migration-classic-resource-manager-ps.md).
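For orientation, the VNet migration flow in the classic (Service Management) Azure PowerShell module looks roughly like the following sketch; the VNet name is a placeholder, and the linked article remains the authoritative procedure:

```azurepowershell
# Sketch of the classic-to-Resource Manager VNet migration flow.
# '<ClassicVNetName>' is a placeholder.
Move-AzureVirtualNetwork -Validate -VirtualNetworkName '<ClassicVNetName>'   # check for blockers
Move-AzureVirtualNetwork -Prepare  -VirtualNetworkName '<ClassicVNetName>'   # stage the migration
Move-AzureVirtualNetwork -Commit   -VirtualNetworkName '<ClassicVNetName>'   # finalize (or use -Abort to roll back)
```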
+
+There's no data path downtime during migration, so existing workloads continue to function without loss of on-premises connectivity. The public IP address associated with the VPN gateway doesn't change during the migration process, which means you don't need to reconfigure your on-premises router once the migration is completed.
+
+The Resource Manager model is different from the classic model and is composed of virtual network gateways, local network gateways, and connection resources. These represent the VPN gateway itself, the local site representing the on-premises address space, and the connectivity between the two, respectively. Once migration is completed, your gateways won't be available in the classic model, and all management operations on virtual network gateways, local network gateways, and connection objects must be performed using the Resource Manager model.
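After migration, those three resource types can be inspected with the Az module; a minimal sketch, with the resource group name as a placeholder:

```azurepowershell
# Sketch: inspect the migrated resources in the Resource Manager model.
# <ResourceGroup> is a placeholder.
Get-AzVirtualNetworkGateway           -ResourceGroupName <ResourceGroup>   # the VPN gateway itself
Get-AzLocalNetworkGateway             -ResourceGroupName <ResourceGroup>   # the on-premises address space
Get-AzVirtualNetworkGatewayConnection -ResourceGroupName <ResourceGroup>   # the connectivity between the two
```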
+
+## Supported scenarios
+
+Most common VPN connectivity scenarios are covered by classic to Resource Manager migration. The supported scenarios include:
+
+* Point-to-site connectivity
+* Site-to-site connectivity with a VPN gateway connected to an on-premises location
+* VNet-to-VNet connectivity between two VNets using VPN gateways
+* Multiple VNets connected to the same on-premises location
+* Multi-site connectivity
+* Forced tunneling enabled VNets
+
+Scenarios that aren't supported include:
+
+* A VNet with both an ExpressRoute gateway and a VPN gateway isn't currently supported.
+* Transit scenarios where VM extensions are connected to on-premises servers. Transit VPN connectivity limitations are detailed in the next sections.
+
+> [!NOTE]
+> CIDR validation in the Resource Manager model is stricter than in the classic model. Before beginning the migration, ensure that the classic address ranges conform to valid CIDR format. CIDR ranges can be validated using any common CIDR validator. VNets or local sites with invalid CIDR ranges result in a failed state when migrated.
+>
+
+## VNet-to-VNet connectivity migration
+
+VNet-to-VNet connectivity in the classic deployment model was achieved by creating a local site representation of the connected VNet. Customers were required to create two local sites that represented the two VNets to be connected, and then connect them to the corresponding VNets over an IPsec tunnel to establish connectivity between the two VNets. This model has manageability challenges, since any address range change in one VNet must also be maintained in the corresponding local site representation. In the Resource Manager model, this workaround is no longer needed. The connection between the two VNets can be achieved directly using the 'Vnet2Vnet' connection type in the Connection resource.
++
+During VNet migration, we detect that the connected entity to the current VNet's VPN gateway is another VNet. We ensure that once migration of both VNets is completed, you no longer see two local sites representing the other VNet. The classic model of two VPN gateways, two local sites, and two connections between them is transformed to the Resource Manager model with two VPN gateways and two connections of type Vnet2Vnet.
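For reference, a Vnet2Vnet connection in the Resource Manager model can be created as in the following sketch; the gateway names, resource groups, location, and shared key are all placeholders:

```azurepowershell
# Sketch: connect two VNets directly with a Vnet2Vnet connection.
# All names and the shared key are placeholders.
$gw1 = Get-AzVirtualNetworkGateway -Name '<Gateway1>' -ResourceGroupName '<ResourceGroup1>'
$gw2 = Get-AzVirtualNetworkGateway -Name '<Gateway2>' -ResourceGroupName '<ResourceGroup2>'

New-AzVirtualNetworkGatewayConnection `
    -Name 'vnet1-to-vnet2' `
    -ResourceGroupName '<ResourceGroup1>' `
    -Location '<Location>' `
    -VirtualNetworkGateway1 $gw1 `
    -VirtualNetworkGateway2 $gw2 `
    -ConnectionType Vnet2Vnet `
    -SharedKey '<SharedKey>'
```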
+
+## Transit VPN connectivity
+
+You can configure VPN gateways in a topology such that on-premises connectivity for a VNet is achieved by connecting to another VNet that is directly connected to on-premises. This is transit VPN connectivity, where instances in the first VNet reach on-premises resources via transit through the VPN gateway in the connected VNet that's directly connected to on-premises. To achieve this configuration in the classic deployment model, you need to create a local site that has aggregated prefixes representing both the connected VNet and the on-premises address space. This representational local site is then connected to the VNet to achieve transit connectivity. The classic model has similar manageability challenges, since any change in the on-premises address range must also be maintained on the local site representing the aggregate of the VNet and on-premises. Introduction of BGP support in Resource Manager gateways simplifies manageability, since connected gateways can learn routes from on-premises without manual modification of prefixes.
++
+Since we transform VNet-to-VNet connectivity without requiring local sites, the transit scenario loses on-premises connectivity for the VNet that is indirectly connected to on-premises. After migration is completed, the loss of connectivity can be mitigated in the following two ways:
+
+* Enable BGP on VPN gateways that are connected together and to the on-premises location. Enabling BGP restores connectivity without any other configuration changes, since routes are learned and advertised between VNet gateways (a sketch follows this list). Note that the BGP option is only available on Standard and higher SKUs.
+* Establish an explicit connection from affected VNet to the local network gateway that represents the on-premises location. This would also require changing configuration on the on-premises router to create and configure the IPsec tunnel.
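As a sketch of the first option, BGP can be enabled on an existing Resource Manager gateway by assigning an ASN; the gateway name, resource group, and ASN value are placeholder assumptions:

```azurepowershell
# Sketch: enable BGP on an existing virtual network gateway by setting an ASN.
# <GatewayName>, <ResourceGroup>, and the ASN value are placeholders.
$gw = Get-AzVirtualNetworkGateway -Name '<GatewayName>' -ResourceGroupName '<ResourceGroup>'
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -Asn 65010
```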
+
+## Next steps
+
+After learning about VPN gateway migration support, go to [platform-supported migration of IaaS resources from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md) to get started.