Updates from: 04/09/2024 01:12:58
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor Resiliency Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-resiliency-reviews.md
You can manage access to Advisor personalized recommendations using the following roles.
| **Name** | **Description** |
|--|--|
|Subscription Reader|View reviews for a workload and recommendations linked to them.|
-|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage review recommendation lifecycle.|
-|Advisor Recommendations Contributor (Assessments and Reviews)|View review recommendations, accept review recommendations, manage review recommendations' lifecycle.|
+|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage the recommendation lifecycle.|
+|Advisor Recommendations Contributor (Assessments and Reviews)|View accepted recommendations, and manage the recommendation lifecycle.|
You can find detailed instructions on how to assign a role using the Azure portal - [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition). Additional information is available in [Steps to assign an Azure role - Azure RBAC](/azure/role-based-access-control/role-assignments-steps).
ai-services Quickstart Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-groundedness.md
Follow this guide to use Azure AI Content Safety Groundedness detection to check whether the text responses of large language models (LLMs) are grounded in the source materials you provide.
## Check groundedness without reasoning
-In the simple case without the _reasoning_ feature, the Groundedness detection API classifies the ungroundedness of the submitted content as `true` or `false` and provides a confidence score.
+In the simple case without the _reasoning_ feature, the Groundedness detection API classifies the ungroundedness of the submitted content as `true` or `false`.
#### [cURL](#tab/curl)
Create a new Python file named _quickstart.py_. Open the new file in your preferred editor or IDE.
-> [!TIP]
-> To test a summarization task instead of a question answering (QnA) task, use the following sample JSON body:
->
-> ```json
-> {
-> "Domain": "Medical",
-> "Task": "Summarization",
-> "Text": "Ms Johnson has been in the hospital after experiencing a stroke.",
-> "GroundingSources": ["Our patient, Ms. Johnson, presented with persistent fatigue, unexplained weight loss, and frequent night sweats. After a series of tests, she was diagnosed with Hodgkin's lymphoma, a type of cancer that affects the lymphatic system. The diagnosis was confirmed through a lymph node biopsy revealing the presence of Reed-Sternberg cells, a characteristic of this disease. She was further staged using PET-CT scans. Her treatment plan includes chemotherapy and possibly radiation therapy, depending on her response to treatment. The medical team remains optimistic about her prognosis given the high cure rate of Hodgkin's lymphoma."],
-> "Reasoning": false
-> }
-> ```
+To test a summarization task instead of a question answering (QnA) task, use the following sample JSON body:
+```json
+{
+ "domain": "Medical",
+ "task": "Summarization",
+ "text": "Ms Johnson has been in the hospital after experiencing a stroke.",
+ "groundingSources": ["Our patient, Ms. Johnson, presented with persistent fatigue, unexplained weight loss, and frequent night sweats. After a series of tests, she was diagnosed with Hodgkin's lymphoma, a type of cancer that affects the lymphatic system. The diagnosis was confirmed through a lymph node biopsy revealing the presence of Reed-Sternberg cells, a characteristic of this disease. She was further staged using PET-CT scans. Her treatment plan includes chemotherapy and possibly radiation therapy, depending on her response to treatment. The medical team remains optimistic about her prognosis given the high cure rate of Hodgkin's lymphoma."],
+ "reasoning": false
+}
+```
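If it helps to see the request wired up end to end, here is a minimal Python sketch that posts this body with the `requests` library. The endpoint path, API version, and environment variable names are assumptions rather than values from this article; adjust them to match your Content Safety resource and the cURL tab.

```python
import os
import requests

# Assumed environment variable names and preview API version; adjust for your resource.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<your-resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

body = {
    "domain": "Medical",
    "task": "Summarization",
    "text": "Ms Johnson has been in the hospital after experiencing a stroke.",
    "groundingSources": ["<grounding source text>"],
    "reasoning": False,
}

response = requests.post(
    f"{endpoint}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-02-15-preview"},  # assumed preview API version
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json())
```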
The following fields must be included in the URL:
The parameters in the request body are defined in this table:
| - `query` | (Optional) This represents the question in a QnA task. Character limit: 7,500. | String |
| **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String |
| **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array |
-| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI resources to provide an explanation. Be careful: using reasoning increases the processing time and incurs extra fees.| Boolean |
+| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI GPT-4 Turbo resources to provide an explanation. Be careful: using reasoning increases the processing time.| Boolean |
### Interpret the API response
The JSON objects in the output are defined here:
| Name | Description | Type |
| :-- | :-- | :-- |
| **ungroundedDetected** | Indicates whether the text exhibits ungroundedness. | Boolean |
-| **confidenceScore** | The confidence value of the _ungrounded_ designation. The score ranges from 0 to 1. | Float |
| **ungroundedPercentage** | Specifies the proportion of the text identified as ungrounded, expressed as a number between 0 and 1, where 0 indicates no ungrounded content and 1 indicates entirely ungrounded content.| Float |
| **ungroundedDetails** | Provides insights into ungrounded content with specific examples and percentages.| Array |
-| -**`Text`** | The specific text that is ungrounded. | String |
+| -**`text`** | The specific text that is ungrounded. | String |
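As a small illustration of how these fields might be consumed, here is a hedged Python sketch that reads the documented properties from a parsed response. It assumes `response` is the result of a request such as the sketch earlier in this section.

```python
result = response.json()  # parsed Groundedness detection response

if result["ungroundedDetected"]:
    # ungroundedPercentage is a value between 0 and 1
    print(f"Ungrounded proportion: {result['ungroundedPercentage']:.2f}")
    for detail in result["ungroundedDetails"]:
        print("Ungrounded text:", detail["text"])
else:
    print("No ungrounded content detected.")
```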
## Check groundedness with reasoning
The Groundedness detection API provides the option to include _reasoning_ in the
### Bring your own GPT deployment
-In order to use your Azure OpenAI resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:
+> [!TIP]
+> At the moment, we only support **Azure OpenAI GPT-4 Turbo** resources and do not support other GPT types. Your GPT-4 Turbo resources can be deployed in any region; however, we recommend that they be located in the same region as the content safety resources to minimize potential latency.
+
+To use your Azure OpenAI GPT-4 Turbo resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:
1. Enable Managed Identity for Azure AI Content Safety.
In order to use your Azure OpenAI resource to enable the reasoning feature, use
### Make the API request
-In your request to the Groundedness detection API, set the `"Reasoning"` body parameter to `true`, and provide the other needed parameters:
+In your request to the Groundedness detection API, set the `"reasoning"` body parameter to `true`, and provide the other needed parameters:
```json {
The parameters in the request body are defined in this table:
| **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String |
| **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array |
| **reasoning** | (Optional) Set to `true`, the service uses Azure OpenAI resources to provide an explanation. Be careful: using reasoning increases the processing time and incurs extra fees.| Boolean |
-| **llmResource** | (Optional) If you want to use your own Azure OpenAI resources instead of our default GPT resources, add this field and include the subfields for the resources used. If you don't want to use your own resources, remove this field from the input. | String |
-| - `resourceType `| Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. | Enum|
+| **llmResource** | (Required) If you want to use your own Azure OpenAI GPT-4 Turbo resource to enable reasoning, add this field and include the subfields for the resources used. | String |
+| - `resourceType `| Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. We only support Azure OpenAI GPT-4 Turbo resources and do not support other GPT types. Your GPT-4 Turbo resources can be deployed in any region; however, we recommend that they be located in the same region as the content safety resources to minimize potential latency. | Enum|
| - `azureOpenAIEndpoint` | Your endpoint URL for Azure OpenAI service. | String |
| - `azureOpenAIDeploymentName` | The name of the specific GPT deployment to use. | String|
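To make the shape of these parameters concrete, here is a hedged sketch of a request body (expressed as a Python dict) with reasoning enabled. The endpoint, deployment name, and the `domain`, `task`, and `query` values are placeholders, not values taken from this article.

```python
body_with_reasoning = {
    "domain": "Generic",             # placeholder domain
    "task": "QnA",                   # placeholder task
    "query": "<question>",           # the question for a QnA task
    "text": "<LLM output text to be checked>",
    "groundingSources": ["<grounding source text>"],
    "reasoning": True,
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-aoai-resource>.openai.azure.com/",
        "azureOpenAIDeploymentName": "<your-gpt-4-turbo-deployment>",
    },
}
```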
The JSON objects in the output are defined here:
| Name | Description | Type |
| :-- | :-- | :-- |
| **ungroundedDetected** | Indicates whether the text exhibits ungroundedness. | Boolean |
-| **confidenceScore** | The confidence value of the _ungrounded_ designation. The score ranges from 0 to 1. | Float |
| **ungroundedPercentage** | Specifies the proportion of the text identified as ungrounded, expressed as a number between 0 and 1, where 0 indicates no ungrounded content and 1 indicates entirely ungrounded content.| Float |
| **ungroundedDetails** | Provides insights into ungrounded content with specific examples and percentages.| Array |
-| -**`Text`** | The specific text that is ungrounded. | String |
+| -**`text`** | The specific text that is ungrounded. | String |
| -**`offset`** | An object describing the position of the ungrounded text in various encodings. | String |
| - `offset > utf8` | The offset position of the ungrounded text in UTF-8 encoding. | Integer |
| - `offset > utf16` | The offset position of the ungrounded text in UTF-16 encoding. | Integer |
The JSON objects in the output are defined here:
| - `length > utf8` | The length of the ungrounded text in UTF-8 encoding. | Integer |
| - `length > utf16` | The length of the ungrounded text in UTF-16 encoding. | Integer |
| - `length > codePoint` | The length of the ungrounded text in terms of Unicode code points. |Integer |
-| -**`Reason`** | Offers explanations for detected ungroundedness. | String |
+| -**`reason`** | Offers explanations for detected ungroundedness. | String |
## Clean up resources
ai-services Customizing Llms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/customizing-llms.md
+
+ Title: Azure OpenAI Service getting started with customizing a large language model (LLM)
+
+description: Learn more about the concepts behind customizing an LLM with Azure OpenAI.
+ Last updated : 03/26/2024++++
+recommendations: false
++
+# Getting started with customizing a large language model (LLM)
+
+There are several techniques for adapting a pre-trained language model to suit a specific task or domain. These include prompt engineering, RAG (Retrieval Augmented Generation), and fine-tuning. These three techniques are not mutually exclusive; they are complementary methods that can be combined for a specific use case. In this article, we'll explore these techniques, present illustrative use cases and things to consider, and provide links to resources to learn more and get started with each.
+
+## Prompt engineering
+
+### Definition
+
+[Prompt engineering](./prompt-engineering.md) is a technique that is both art and science, and involves designing prompts for generative AI models. This process uses in-context learning ([zero shot and few shot](./prompt-engineering.md#examples)) and, with iteration, improves the accuracy and relevance of responses, optimizing the performance of the model.
+
+### Illustrative use cases
+
+A Marketing Manager at an environmentally conscious company can use prompt engineering to help guide the model to generate descriptions that are more aligned with their brand's tone and style. For instance, they can add a prompt like "Write a product description for a new line of eco-friendly cleaning products that emphasizes quality, effectiveness, and highlights the use of environmentally friendly ingredients" to the input. This will help the model generate descriptions that are aligned with their brand's values and messaging.
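As a minimal sketch of what this could look like in code, here is a hedged example that sends such a prompt to an Azure OpenAI chat deployment using the legacy `openai` Python SDK style shown elsewhere in these docs; the resource name, deployment name, and system message are placeholders.

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://{your-resource-name}.openai.azure.com/"
openai.api_version = "2024-02-01"
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the chat model deployment name you chose
    messages=[
        {"role": "system", "content": "You write marketing copy for an environmentally conscious brand."},
        {"role": "user", "content": "Write a product description for a new line of eco-friendly cleaning products that emphasizes quality, effectiveness, and highlights the use of environmentally friendly ingredients."},
    ],
    temperature=0.7,
    max_tokens=300,
)

print(response["choices"][0]["message"]["content"])
```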
+
+### Things to consider
+
+- **Prompt engineering** is the starting point for generating desired output from generative AI models.
+
+- **Craft clear instructions**: Instructions are commonly used in prompts and guide the model's behavior. Be specific and leave as little room for interpretation as possible. Use analogies and descriptive language to help the model understand your desired outcome.
+
+- **Experiment and iterate**: Prompt engineering is an art that requires experimentation and iteration. Practice and gain experience in crafting prompts for different tasks. Every model might behave differently, so it's important to adapt prompt engineering techniques accordingly.
+
+### Getting started
+
+- [Introduction to prompt engineering](./prompt-engineering.md)
+- [Prompt engineering techniques](./advanced-prompt-engineering.md)
+- [15 tips to become a better prompt engineer for generative AI](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/15-tips-to-become-a-better-prompt-engineer-for-generative-ai/ba-p/3882935)
+- [The basics of prompt engineering (video)](https://www.youtube.com/watch?v=e7w6QV1NX1c)
+
+## RAG (Retrieval Augmented Generation)
+
+### Definition
+
+[RAG (Retrieval Augmented Generation)](../../../ai-studio/concepts/retrieval-augmented-generation.md) is a method that integrates external data into a Large Language Model prompt to generate relevant responses. This approach is particularly beneficial when using a large corpus of unstructured text based on different topics. It allows for answers to be grounded in the organization's knowledge base (KB), providing a more tailored and accurate response.
+
+RAG is also advantageous when answering questions based on an organization's private data or when the public data that the model was trained on might have become outdated. This helps ensure that the responses are always up-to-date and relevant, regardless of the changes in the data landscape.
+
+### Illustrative use case
+
+A corporate HR department is looking to provide an intelligent assistant that answers specific employee health insurance questions such as "Are eyeglasses covered?" RAG is used to ingest the extensive set of documents associated with insurance plan policies so the assistant can answer these specific types of questions.
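As a rough, hedged sketch of the RAG pattern behind such an assistant: retrieve the most relevant passages, then pass them as context in the prompt. The retriever below is a stand-in (a real implementation would query Azure AI Search or another index), and the resource and deployment names are placeholders.

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://{your-resource-name}.openai.azure.com/"
openai.api_version = "2024-02-01"
openai.api_key = os.getenv("OPENAI_API_KEY")

def search_knowledge_base(question: str, top: int = 3) -> list[str]:
    # Placeholder retriever: a real implementation would query Azure AI Search
    # (or another vector index) and return the top matching policy passages.
    return ["Vision benefits: eyeglasses and contact lenses are covered up to $200 per year."]

def answer_with_rag(question: str) -> str:
    context = "\n".join(search_knowledge_base(question))
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # your chat model deployment name
        messages=[
            {"role": "system", "content": "Answer only from the provided context. If the answer isn't there, say you don't know.\n\nContext:\n" + context},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

print(answer_with_rag("Are eyeglasses covered?"))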
+
+### Things to consider
+
+- RAG helps ground AI output in real-world data and reduces the likelihood of fabrication.
+
+- RAG is helpful when there is a need to answer questions based on private proprietary data.
+
+- RAG is helpful when you want questions answered using recent information (for example, information dated after the cutoff date of when the [model version](./models.md) was last trained).
+
+### Getting started
+
+- [Retrieval Augmented Generation in Azure AI Studio - Azure AI Studio | Microsoft Learn](../../../ai-studio/concepts/retrieval-augmented-generation.md)
+- [Retrieval Augmented Generation (RAG) in Azure AI Search](../../../search/retrieval-augmented-generation-overview.md)
+- [Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview)](../../../machine-learning/concept-retrieval-augmented-generation.md)
+
+## Fine-tuning
+
+### Definition
+
+[Fine-tuning](../how-to/fine-tuning.md), specifically [supervised fine-tuning](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/fine-tuning-now-available-with-azure-openai-service/ba-p/3954693?lightbox-message-images-3954693=516596iC5D02C785903595A) in this context, is an iterative process that adapts an existing large language model to a provided training set in order to improve performance, teach the model new skills, or reduce latency. This approach is used when the model needs to learn and generalize over specific topics, particularly when these topics are generally small in scope.
+
+Fine-tuning requires the use of high-quality training data, in a [special example based format](../how-to/fine-tuning.md#example-file-format), to create the new fine-tuned Large Language Model. By focusing on specific topics, fine-tuning allows the model to provide more accurate and relevant responses within those areas of focus.
+
+### Illustrative use case
+
+An IT department has been using GPT-4 to convert natural language queries to SQL, but they have found that the responses are not always reliably grounded in their schema, and the cost is prohibitively high.
+
+They fine-tune GPT-3.5-Turbo with hundreds of requests and correct responses and produce a model that performs better than the base model with lower costs and latency.
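As a hedged illustration of what a handful of those training examples might look like in the chat-style JSONL layout referenced above (the schema details and SQL here are invented for illustration, not taken from the article):

```python
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You translate natural language questions into SQL for the sales database."},
            {"role": "user", "content": "How many orders were placed last month?"},
            {"role": "assistant", "content": "SELECT COUNT(*) FROM orders WHERE order_date >= DATEADD(month, -1, GETDATE());"},
        ]
    },
    # ...hundreds more request/response pairs collected from the team's workload
]

# Each line of the JSONL training file is one complete example.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```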
+
+### Things to consider
+
+- Fine-tuning is an advanced capability; it enhances an LLM with after-cutoff-date knowledge and/or domain-specific knowledge. Start by evaluating the baseline performance of a standard model against your requirements before considering this option.
+
+- Having a baseline for performance without fine-tuning is essential for knowing whether fine-tuning has improved model performance. Fine-tuning with bad data makes the base model worse, but without a baseline, it's hard to detect regressions.
+
+- Good cases for fine-tuning include steering the model to output content in a specific and customized style, tone, or format, or tasks where the information needed to steer the model is too long or complex to fit into the prompt window.
+
+- Fine-tuning costs:
+
+ - Fine-tuning can reduce costs across two dimensions: (1) by using fewer tokens, depending on the task, and (2) by using a smaller model (for example, GPT-3.5 Turbo can potentially be fine-tuned to achieve the same quality as GPT-4 on a particular task).
+
+ - Fine-tuning has upfront costs for training the model, and additional hourly costs for hosting the custom model once it's deployed.
+
+### Getting started
+
+- [When to use Azure OpenAI fine-tuning](./fine-tuning-considerations.md)
+- [Customize a model with fine-tuning](../how-to/fine-tuning.md)
+- [Azure OpenAI GPT 3.5 Turbo fine-tuning tutorial](../tutorials/fine-tune.md)
+- [To fine-tune or not to fine-tune? (Video)](https://www.youtube.com/watch?v=0Jo-z-MFxJs)
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 03/14/2024 Last updated : 04/04/2024
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
**<sup>1</sup>** This model will accept requests > 4,096 tokens. It is not recommended to exceed the 4,096 input token limit because newer versions of the model are capped at 4,096 tokens. If you encounter issues when exceeding 4,096 input tokens with this model, this configuration is not officially supported.
+#### Azure Government regions
+
+The following GPT-3.5 turbo models are available with [Azure Government](/azure/azure-government/documentation-government-welcome):
+
+|Model ID | Model Availability |
+|--|--|
+| `gpt-35-turbo` (1106-Preview) | US Gov Virginia |
+ ### Embeddings models

These models can only be used with Embedding API requests.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 02/26/2024 Last updated : 04/08/2024 recommendations: false
There's an [upload limit](../quotas-limits.md), and there are some caveats about
## Supported data sources
-You need to connect to a data source to upload your data. When you want to use your data to chat with an Azure OpenAI model, your data is chunked in a search index so that relevant data can be found based on user queries. For some data sources such as uploading files from your local machine (preview) or data contained in a blob storage account (preview), Azure AI Search is used.
+You need to connect to a data source to upload your data. When you want to use your data to chat with an Azure OpenAI model, your data is chunked in a search index so that relevant data can be found based on user queries.
-When you choose the following data sources, your data is ingested into an Azure AI Search index.
+The [Integrated Vector Database in Azure Cosmos DB for MongoDB](/azure/cosmos-db/mongodb/vcore/vector-search) natively supports integration with Azure OpenAI On Your Data.
+
+For some data sources such as uploading files from your local machine (preview) or data contained in a blob storage account (preview), Azure AI Search is used. When you choose the following data sources, your data is ingested into an Azure AI Search index.
+
+>[!TIP]
+>If you use Azure Cosmos DB (except for its vCore-based API for MongoDB), you may be eligible for the [Azure AI Advantage offer](/azure/cosmos-db/ai-advantage), which provides the equivalent of up to $6,000 in Azure Cosmos DB throughput credits.
|Data source | Description |
|--|--|
| [Azure AI Search](/azure/search/search-what-is-azure-search) | Use an existing Azure AI Search index with Azure OpenAI On Your Data. |
+| [Azure Cosmos DB](/azure/cosmos-db/introduction) | Azure Cosmos DB's API for Postgres and vCore-based API for MongoDB have natively integrated vector indexing and do not require Azure AI Search; however, its other APIs do require Azure AI Search for vector indexing. Azure Cosmos DB for NoSQL will offer a natively integrated vector database by mid-2024. |
|Upload files (preview) | Upload files from your local machine to be stored in an Azure Blob Storage database, and ingested into Azure AI Search. |
|URL/Web address (preview) | Web content from the URLs is stored in Azure Blob Storage. |
|Azure Blob Storage (preview) | Upload files from Azure Blob Storage to be ingested into an Azure AI Search index. |
If you want to implement additional value-based criteria for query execution, yo
[!INCLUDE [ai-search-ingestion](../includes/ai-search-ingestion.md)]
-# [Azure Cosmos DB for MongoDB vCore](#tab/mongo-db)
+# [Vector Database in Azure Cosmos DB for MongoDB vCore](#tab/mongo-db)
### Prerequisites * [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/introduction) account
If you want to implement additional value-based criteria for query execution, yo
### Limitations * Only Azure Cosmos DB for MongoDB vCore is supported.
-* The search type is limited to [Azure Cosmos DB for MongoDB vCore vector search](/azure/cosmos-db/mongodb/vcore/vector-search) with an Azure OpenAI embedding model.
+* The search type is limited to [Integrated Vector Database in Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search) with an Azure OpenAI embedding model.
* This implementation works best on unstructured and spatial data. ### Data preparation
You can modify the following additional settings in the **Data parameters** section:
|**Retrieved documents** | This parameter is an integer that can be set to 3, 5, 10, or 20, and controls the number of document chunks provided to the large language model for formulating the final response. By default, this is set to 5. The search process can be noisy and sometimes, due to chunking, relevant information might be spread across multiple chunks in the search index. Selecting a top-K number, like 5, ensures that the model can extract relevant information, despite the inherent limitations of search and chunking. However, increasing the number too high can potentially distract the model. Additionally, the maximum number of documents that can be effectively used depends on the version of the model, as each has a different context size and capacity for handling documents. If you find that responses are missing important context, try increasing this parameter. This is the `topNDocuments` parameter in the API, and is 5 by default. |
| **Strictness** | Determines the system's aggressiveness in filtering search documents based on their similarity scores. The system queries Azure Search or other document stores, then decides which documents to provide to large language models like ChatGPT. Filtering out irrelevant documents can significantly enhance the performance of the end-to-end chatbot. Some documents are excluded from the top-K results if they have low similarity scores before forwarding them to the model. This is controlled by an integer value ranging from 1 to 5. Setting this value to 1 means that the system will minimally filter documents based on search similarity to the user query. Conversely, a setting of 5 indicates that the system will aggressively filter out documents, applying a very high similarity threshold. If you find that the chatbot omits relevant information, lower the filter's strictness (set the value closer to 1) to include more documents. Conversely, if irrelevant documents distract the responses, increase the threshold (set the value closer to 5). This is the `strictness` parameter in the API, and set to 3 by default. |
+### Uncited references
+
+It's possible for the model to return `"TYPE":"UNCITED_REFERENCE"` instead of `"TYPE":"CONTENT"` in the API for documents that are retrieved from the data source, but not included in the citation. This can be useful for debugging, and you can control this behavior by modifying the **strictness** and **retrieved documents** runtime parameters described above.
+ ### System message

You can define a system message to steer the model's reply when using Azure OpenAI On Your Data. This message allows you to customize your replies on top of the retrieval augmented generation (RAG) pattern that Azure OpenAI On Your Data uses. The system message is used in addition to an internal base prompt to provide the experience. To support this, we truncate the system message after a specific [number of tokens](#token-usage-estimation-for-azure-openai-on-your-data) to ensure the model can answer questions using your data. If you are defining extra behavior on top of the default experience, ensure that your system prompt is detailed and explains the exact expected customization.
token_output = TokenEstimator.estimate_tokens(input_text)
## Troubleshooting
-### Failed ingestion jobs
-
-To troubleshoot a failed job, always look out for errors or warnings specified either in the API response or Azure OpenAI studio. Here are some of the common errors and warnings:
+To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure OpenAI studio. Here are some of the common errors and warnings:
+### Failed ingestion jobs
**Quota Limitations Issues**
Resolution:
This means the storage account isn't accessible with the given credentials. In this case, please review the storage account credentials passed to the API and ensure the storage account isn't hidden behind a private endpoint (if a private endpoint isn't configured for this resource).
+### 503 errors when sending queries with Azure AI Search
+
+Each user message can translate to multiple search queries, all of which get sent to the search resource in parallel. This can produce throttling behavior when the number of search replicas and partitions is low. The maximum number of queries per second that a single partition and single replica can support may not be sufficient. In this case, consider increasing your replicas and partitions, or adding sleep/retry logic in your application. See the [Azure AI Search documentation](../../../search/performance-benchmarks.md) for more information.
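A minimal sketch of that retry idea, assuming a callable that wraps your Azure OpenAI On Your Data request; the exception handling here is deliberately generic, so narrow it to the throttling errors your client library actually raises.

```python
import random
import time

def call_with_retries(make_request, max_attempts: int = 5):
    """Retry a throttled request with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return make_request()
        except Exception:  # narrow to 429/503-style errors from your client
            if attempt == max_attempts - 1:
                raise
            time.sleep((2 ** attempt) + random.random())
```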
+ ## Regional availability and model support

You can use Azure OpenAI On Your Data with an Azure OpenAI resource in the following regions:
ai-services Chat Markup Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/chat-markup-language.md
+
+ Title: How to work with the Chat Markup Language (preview)
+
+description: Learn how to work with Chat Markup Language (preview)
++++ Last updated : 04/05/2024+
+keywords: ChatGPT
++
+# Chat Markup Language ChatML (Preview)
+
+> [!IMPORTANT]
+> Using GPT-3.5-Turbo models with the completion endpoint as described in this article remains in preview and is only possible with `gpt-35-turbo` version (0301) which is [slated for retirement as early as June 13th, 2024](../concepts/model-retirements.md#current-models). We strongly recommend using the [GA Chat Completion API/endpoint](./chatgpt.md). The Chat Completion API is the recommended method of interacting with the GPT-3.5-Turbo models. The Chat Completion API is also the only way to access the GPT-4 models.
+
+The following code snippet shows the most basic way to use the GPT-3.5-Turbo models with ChatML. If this is your first time using these models programmatically, we recommend starting with our [GPT-35-Turbo & GPT-4 Quickstart](../chatgpt-quickstart.md).
+
+> [!NOTE]
+> In the Azure OpenAI documentation, we refer to GPT-3.5-Turbo and GPT-35-Turbo interchangeably. The official name of the model on OpenAI is `gpt-3.5-turbo`, but for Azure OpenAI, due to Azure-specific character constraints, the underlying model name is `gpt-35-turbo`.
+
+```python
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = "https://{your-resource-name}.openai.azure.com/"
+openai.api_version = "2024-02-01"
+openai.api_key = os.getenv("OPENAI_API_KEY")
+
+response = openai.Completion.create(
+ engine="gpt-35-turbo", # The deployment name you chose when you deployed the GPT-35-Turbo model
+ prompt="<|im_start|>system\nAssistant is a large language model trained by OpenAI.\n<|im_end|>\n<|im_start|>user\nWho were the founders of Microsoft?\n<|im_end|>\n<|im_start|>assistant\n",
+ temperature=0,
+ max_tokens=500,
+ top_p=0.5,
+ stop=["<|im_end|>"])
+
+print(response['choices'][0]['text'])
+```
+
+> [!NOTE]
+> The following parameters aren't available with the gpt-35-turbo model: `logprobs`, `best_of`, and `echo`. If you set any of these parameters, you'll get an error.
+
+The `<|im_end|>` token indicates the end of a message. When using ChatML, it's recommended to include the `<|im_end|>` token as a stop sequence to ensure that the model stops generating text when it reaches the end of the message.
+
+Consider setting `max_tokens` to a slightly higher value than normal such as 300 or 500. This ensures that the model doesn't stop generating text before it reaches the end of the message.
+
+## Model versioning
+
+> [!NOTE]
+> `gpt-35-turbo` is equivalent to the `gpt-3.5-turbo` model from OpenAI.
+
+Unlike previous GPT-3 and GPT-3.5 models, the `gpt-35-turbo` model as well as the `gpt-4` and `gpt-4-32k` models will continue to be updated. When creating a [deployment](../how-to/create-resource.md#deploy-a-model) of these models, you'll also need to specify a model version.
+
+You can find the model retirement dates for these models on our [models](../concepts/models.md) page.
+
+## Working with Chat Markup Language (ChatML)
+
+> [!NOTE]
+> OpenAI continues to improve GPT-35-Turbo, and the Chat Markup Language used with the models will continue to evolve in the future. We'll keep this document updated with the latest information.
+
+OpenAI trained GPT-35-Turbo on special tokens that delineate the different parts of the prompt. The prompt starts with a system message that is used to prime the model, followed by a series of messages between the user and the assistant.
+
+The format of a basic ChatML prompt is as follows:
+
+```
+<|im_start|>system
+Provide some context and/or instructions to the model.
+<|im_end|>
+<|im_start|>user
+The user's message goes here
+<|im_end|>
+<|im_start|>assistant
+```
+
+### System message
+
+The system message is included at the beginning of the prompt between the `<|im_start|>system` and `<|im_end|>` tokens. This message provides the initial instructions to the model. You can provide various information in the system message including:
+
+* A brief description of the assistant
+* Personality traits of the assistant
+* Instructions or rules you would like the assistant to follow
+* Data or information needed for the model, such as relevant questions from an FAQ
+
+You can customize the system message for your use case or just include a basic system message. The system message is optional, but it's recommended to at least include a basic one to get the best results.
+
+### Messages
+
+After the system message, you can include a series of messages between the **user** and the **assistant**. Each message should begin with the `<|im_start|>` token followed by the role (`user` or `assistant`) and end with the `<|im_end|>` token.
+
+```
+<|im_start|>user
+What is thermodynamics?
+<|im_end|>
+```
+
+To trigger a response from the model, the prompt should end with the `<|im_start|>assistant` token, indicating that it's the assistant's turn to respond. You can also include messages between the user and the assistant in the prompt as a way to do few shot learning.
+
+### Prompt examples
+
+The following section shows examples of different styles of prompts that you could use with the GPT-35-Turbo and GPT-4 models. These examples are just a starting point, and you can experiment with different prompts to customize the behavior for your own use cases.
+
+#### Basic example
+
+If you want the GPT-35-Turbo and GPT-4 models to behave similarly to [chat.openai.com](https://chat.openai.com/), you can use a basic system message like "Assistant is a large language model trained by OpenAI."
+
+```
+<|im_start|>system
+Assistant is a large language model trained by OpenAI.
+<|im_end|>
+<|im_start|>user
+Who were the founders of Microsoft?
+<|im_end|>
+<|im_start|>assistant
+```
+
+#### Example with instructions
+
+For some scenarios, you might want to give additional instructions to the model to define guardrails for what the model is able to do.
+
+```
+<|im_start|>system
+Assistant is an intelligent chatbot designed to help users answer their tax related questions.
+
+Instructions:
+- Only answer questions related to taxes.
+- If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information.
+<|im_end|>
+<|im_start|>user
+When are my taxes due?
+<|im_end|>
+<|im_start|>assistant
+```
+
+#### Using data for grounding
+
+You can also include relevant data or information in the system message to give the model extra context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](../tutorials/embeddings.md?tabs=command-line) or a product like [Azure AI Search](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087) to retrieve the most relevant information at query time.
+
+```
+<|im_start|>system
+Assistant is an intelligent chatbot designed to help users answer technical questions about Azure OpenAI Service. Only answer questions using the context below and if you're not sure of an answer, you can say "I don't know".
+
+Context:
+- Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series.
+- Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
+- At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's principles for responsible AI use
+<|im_end|>
+<|im_start|>user
+What is Azure OpenAI Service?
+<|im_end|>
+<|im_start|>assistant
+```
+
+#### Few shot learning with ChatML
+
+You can also give few shot examples to the model. The approach for few shot learning has changed slightly because of the new prompt format. You can now include a series of messages between the user and the assistant in the prompt as few shot examples. These examples can be used to seed answers to common questions to prime the model or teach particular behaviors to the model.
+
+This is only one example of how you can use few shot learning with GPT-35-Turbo. You can experiment with different approaches to see what works best for your use case.
+
+```
+<|im_start|>system
+Assistant is an intelligent chatbot designed to help users answer their tax related questions.
+<|im_end|>
+<|im_start|>user
+When do I need to file my taxes by?
+<|im_end|>
+<|im_start|>assistant
+In 2023, you will need to file your taxes by April 18th. The date falls after the usual April 15th deadline because April 15th falls on a Saturday in 2023. For more details, see https://www.irs.gov/filing/individuals/when-to-file
+<|im_end|>
+<|im_start|>user
+How can I check the status of my tax refund?
+<|im_end|>
+<|im_start|>assistant
+You can check the status of your tax refund by visiting https://www.irs.gov/refunds
+<|im_end|>
+```
+
+#### Using Chat Markup Language for non-chat scenarios
+
+ChatML is designed to make multi-turn conversations easier to manage, but it also works well for non-chat scenarios.
+
+For example, for an entity extraction scenario, you might use the following prompt:
+
+```
+<|im_start|>system
+You are an assistant designed to extract entities from text. Users will paste in a string of text and you will respond with entities you've extracted from the text as a JSON object. Here's an example of your output format:
+{
+ "name": "",
+ "company": "",
+ "phone_number": ""
+}
+<|im_end|>
+<|im_start|>user
+Hello. My name is Robert Smith. I'm calling from Contoso Insurance, Delaware. My colleague mentioned that you are interested in learning about our comprehensive benefits policy. Could you give me a call back at (555) 346-9322 when you get a chance so we can go over the benefits?
+<|im_end|>
+<|im_start|>assistant
+```
++
+## Preventing unsafe user inputs
+
+It's important to add mitigations into your application to ensure safe use of the Chat Markup Language.
+
+We recommend that you prevent end-users from being able to include special tokens in their input such as `<|im_start|>` and `<|im_end|>`. We also recommend that you include additional validation to ensure the prompts you're sending to the model are well formed and follow the Chat Markup Language format as described in this document.
+
+You can also provide instructions in the system message to guide the model on how to respond to certain types of user inputs. For example, you can instruct the model to only reply to messages about a certain subject. You can also reinforce this behavior with few shot examples.
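A minimal sketch of that kind of validation follows; the token list and the choice to reject (rather than strip) offending input are assumptions to adapt to your application.

```python
RESERVED_TOKENS = ("<|im_start|>", "<|im_end|>")

def validate_user_input(user_input: str) -> str:
    # Reject input that tries to smuggle in ChatML special tokens.
    for token in RESERVED_TOKENS:
        if token in user_input:
            raise ValueError("Input contains reserved ChatML tokens.")
    return user_input.strip()
```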
++
+## Managing conversations
+
+The token limit for `gpt-35-turbo` is 4096 tokens. This limit includes the token count from both the prompt and completion. The number of tokens in the prompt combined with the value of the `max_tokens` parameter must stay under 4096 or you'll receive an error.
+
+It's your responsibility to ensure the prompt and completion fall within the token limit. This means that for longer conversations, you need to keep track of the token count and only send the model a prompt that falls within the token limit.
+
+The following code sample shows a simple example of how you could keep track of the separate messages in the conversation.
+
+```python
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = "https://{your-resource-name}.openai.azure.com/" #This corresponds to your Azure OpenAI resource's endpoint value
+openai.api_version = "2024-02-01"
+openai.api_key = os.getenv("OPENAI_API_KEY")
+
+# defining a function to create the prompt from the system message and the conversation messages
+def create_prompt(system_message, messages):
+ prompt = system_message
+ for message in messages:
+ prompt += f"\n<|im_start|>{message['sender']}\n{message['text']}\n<|im_end|>"
+ prompt += "\n<|im_start|>assistant\n"
+ return prompt
+
+# defining the user input and the system message
+user_input = "<your user input>"
+system_message = f"<|im_start|>system\n{'<your system message>'}\n<|im_end|>"
+
+# creating a list of messages to track the conversation
+messages = [{"sender": "user", "text": user_input}]
+
+response = openai.Completion.create(
+ engine="gpt-35-turbo", # The deployment name you chose when you deployed the GPT-35-Turbo model.
+ prompt=create_prompt(system_message, messages),
+ temperature=0.5,
+ max_tokens=250,
+ top_p=0.9,
+ frequency_penalty=0,
+ presence_penalty=0,
+ stop=['<|im_end|>']
+)
+
+messages.append({"sender": "assistant", "text": response['choices'][0]['text']})
+print(response['choices'][0]['text'])
+```
+
+## Staying under the token limit
+
+The simplest approach to staying under the token limit is to remove the oldest messages in the conversation when you reach the token limit.
+
+You can choose to always include as many tokens as possible while staying under the limit or you could always include a set number of previous messages assuming those messages stay within the limit. It's important to keep in mind that longer prompts take longer to generate a response and incur a higher cost than shorter prompts.
+
+You can estimate the number of tokens in a string by using the [tiktoken](https://github.com/openai/tiktoken) Python library as shown below.
+
+```python
+import tiktoken
+
+cl100k_base = tiktoken.get_encoding("cl100k_base")
+
+enc = tiktoken.Encoding(
+ name="gpt-35-turbo",
+ pat_str=cl100k_base._pat_str,
+ mergeable_ranks=cl100k_base._mergeable_ranks,
+ special_tokens={
+ **cl100k_base._special_tokens,
+ "<|im_start|>": 100264,
+ "<|im_end|>": 100265
+ }
+)
+
+tokens = enc.encode(
+ "<|im_start|>user\nHello<|im_end|><|im_start|>assistant",
+ allowed_special={"<|im_start|>", "<|im_end|>"}
+)
+
+assert len(tokens) == 7
+assert tokens == [100264, 882, 198, 9906, 100265, 100264, 78191]
+```
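Building on the encoder above, here is a minimal sketch of the trimming approach described earlier; it reuses the `enc` encoder from the previous block and the `create_prompt` helper from the conversation sample, and the 4096-token limit and 250-token response budget are illustrative values.

```python
TOKEN_LIMIT = 4096
MAX_RESPONSE_TOKENS = 250

def trim_messages(system_message, messages):
    # Drop the oldest messages until the prompt plus the response budget fits.
    while messages:
        prompt = create_prompt(system_message, messages)
        prompt_tokens = len(enc.encode(prompt, allowed_special={"<|im_start|>", "<|im_end|>"}))
        if prompt_tokens + MAX_RESPONSE_TOKENS <= TOKEN_LIMIT:
            break
        messages = messages[1:]  # remove the oldest message
    return messages
```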
+
+## Next steps
+
+* [Learn more about Azure OpenAI](../overview.md).
+* Get started with the GPT-35-Turbo model with [the GPT-35-Turbo & GPT-4 quickstart](../chatgpt-quickstart.md).
+* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples)
ai-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/chatgpt.md
Previously updated : 03/29/2024 Last updated : 04/05/2024 keywords: ChatGPT
-zone_pivot_groups: openai-chat
-# Learn how to work with the GPT-35-Turbo and GPT-4 models
+# Learn how to work with the GPT-3.5-Turbo and GPT-4 models
-The GPT-35-Turbo and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the GPT-35-Turbo and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
+The GPT-3.5-Turbo and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the GPT-3.5-Turbo and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
-In Azure OpenAI there are two different options for interacting with these type of models:
+This article walks you through getting started with the GPT-3.5-Turbo and GPT-4 models. It's important to use the techniques described here to get the best results. If you try to interact with the models the same way you did with the older model series, the models will often be verbose and provide less useful responses.
-- Chat Completion API.-- Completion API with Chat Markup Language (ChatML).-
-The Chat Completion API is a new dedicated API for interacting with the GPT-35-Turbo and GPT-4 models. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**.
-
-ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, it requires a unique token based prompt format known as Chat Markup Language (ChatML). This provides lower level access than the dedicated Chat Completion API, but also requires additional input validation, only supports gpt-35-turbo models, and **the underlying format is more likely to change over time**.
-
-This article walks you through getting started with the GPT-35-Turbo and GPT-4 models. It's important to use the techniques described here to get the best results. If you try to interact with the models the same way you did with the older model series, the models will often be verbose and provide less useful responses.
------
ai-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/monitoring.md
The following table summarizes the current subset of metrics available in Azure OpenAI.
|Metric|Category|Aggregation|Description|Dimensions|
|--|--|--|--|--|
|`Azure OpenAI Requests`|HTTP|Count|Total number of calls made to the Azure OpenAI API over a period of time. Applies to PayGo, PTU, and PTU-managed SKUs.| `ApiName`, `ModelDeploymentName`,`ModelName`,`ModelVersion`, `OperationName`, `Region`, `StatusCode`, `StreamType`|
-| `Generated Completion Tokens` | Usage | Sum | Number of generated tokens (output) from an OpenAI model. Applies to PayGo, PTU, and PTU-manged SKUs | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
-| `Processed FineTuned Training Hours` | Usage |Sum| Number of Training Hours Processed on an OpenAI FineTuned Model | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
-| `Processed Inference Tokens` | Usage | Sum| Number of inference tokens processed by an OpenAI model. Calculated as prompt tokens (input) + generated tokens. Applies to PayGo, PTU, and PTU-manged SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
-| `Processed Prompt Tokens` | Usage | Sum | Total number of prompt tokens (input) processed on an OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
+| `Generated Completion Tokens` | Usage | Sum | Number of generated tokens (output) from an Azure OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
+| `Processed FineTuned Training Hours` | Usage |Sum| Number of training hours processed on an Azure OpenAI fine-tuned model. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
+| `Processed Inference Tokens` | Usage | Sum| Number of inference tokens processed by an Azure OpenAI model. Calculated as prompt tokens (input) + generated tokens. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
+| `Processed Prompt Tokens` | Usage | Sum | Total number of prompt tokens (input) processed on an Azure OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
| `Provision-managed Utilization V2` | Usage | Average | Provision-managed utilization is the utilization percentage for a given provisioned-managed deployment. Calculated as (PTUs consumed/PTUs deployed)*100. When utilization is at or above 100%, calls are throttled and return a 429 error code. | `ModelDeploymentName`,`ModelName`,`ModelVersion`, `Region`, `StreamType`|

## Configure diagnostic settings
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Sample source code for the web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version being used is [retired](../api-version-deprecation.md#retiring-soon).
+Consider clicking either the **watch** or **star** button on the web app's [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT) repo to be notified about changes and updates to the source code.
+ **If you haven't customized the app:** * You can follow the synchronization steps below
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
Previously updated : 02/13/2024 Last updated : 04/05/2024 recommendations: false
Make sure your sign-in credential has the `Cognitive Services OpenAI Contributor` role.
### Ingestion API
-See the [ingestion API reference article](/azure/ai-services/openai/reference#start-an-ingestion-job) for details on the request and response objects used by the ingestion API.
+See the [ingestion API reference article](/rest/api/azureopenai/ingestion-jobs?context=/azure/ai-services/openai/context/context) for details on the request and response objects used by the ingestion API.
More notes:
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
The operation returns a `204` status code if successful. This API only succeeds
## Speech to text
+You can use a Whisper model in Azure OpenAI Service for speech to text transcription or speech translation. For more information about using a Whisper model, see the [quickstart](./whisper-quickstart.md) and [the Whisper model overview](../speech-service/whisper-overview.md).
+ ### Request a speech to text transcription

Transcribes an audio file.
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Default | Description |
|--|--|--|--|--|
-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: `flac`, `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `ogg`, `wav`, or `webm`.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
+| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: `flac`, `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `ogg`, `wav`, or `webm`.<br/><br/>The file size limit for the Whisper model in Azure OpenAI Service is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
| ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
| ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
| ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
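For orientation, here is a hedged Python sketch of such a transcription request using the `requests` library. The deployment name, API version, environment variable name, and audio file name are placeholders or assumptions to adjust for your resource.

```python
import os
import requests

endpoint = "https://{your-resource-name}.openai.azure.com"
deployment = "{deployment-id}"          # your Whisper deployment name
api_version = "2024-02-01"              # assumed API version

url = f"{endpoint}/openai/deployments/{deployment}/audio/transcriptions"
headers = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}

with open("sample.wav", "rb") as audio_file:  # placeholder audio file
    response = requests.post(
        url,
        params={"api-version": api_version},
        headers=headers,
        files={"file": audio_file},
        data={"language": "en", "response_format": "json"},
    )

response.raise_for_status()
print(response.json()["text"])
```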
The speech is returned as an audio file from the previous request.
## Management APIs
-Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update, and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
+Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update, and delete operations. The management APIs are also used for deploying models within an Azure OpenAI resource.
[**Management APIs reference documentation**](/rest/api/aiservices/)
ai-services Text To Speech Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md
echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/envi
## Clean up resources
-If you want to clean up and remove an OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
+If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
Using this approach, you can use embeddings as a search mechanism across documents.
## Clean up resources
-If you created an OpenAI resource solely for completing this tutorial and want to clean up and remove an OpenAI resource, you'll need to delete your deployed models, and then delete the resource or associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
+If you created an Azure OpenAI resource solely for completing this tutorial and want to clean up and remove an Azure OpenAI resource, you'll need to delete your deployed models, and then delete the resource or associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
- [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
## Clean up resources
-If you want to clean up and remove an OpenAI or Azure AI Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+If you want to clean up and remove an Azure OpenAI or Azure AI Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- [Azure AI services resources](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure AI Search resources](/azure/search/search-get-started-portal#clean-up-resources)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
New training course:
} ```
-**Content filtering is temporarily off** by default. Azure content moderation works differently than OpenAI. Azure OpenAI runs content filters during the generation call to detect harmful or abusive content and filters them from the response. [Learn More](./concepts/content-filter.md)
+**Content filtering is temporarily off** by default. Azure content moderation works differently than Azure OpenAI. Azure OpenAI runs content filters during the generation call to detect harmful or abusive content and filters them from the response. [Learn MoreΓÇï](./concepts/content-filter.md)
These models will be re-enabled in Q1 2023 and be on by default.
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
To successfully make a call against Azure OpenAI, you'll need an **endpoint** an
Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption. Create and assign persistent environment variables for your key and endpoint.
echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/envi
## Clean up resources
-If you want to clean up and remove an OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
+If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
ai-services Rest Api Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/reference/rest-api-resources.md
Select a service from the table to learn how it can help you meet your developme
| Service documentation | Description | Reference documentation |
| :--- | :--- | :--- |
-| ![Azure AI Search icon](../../ai-services/media/service-icons/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
-| ![Azure OpenAI Service icon](../../ai-services/medi)</br>&bullet; [fine-tuning](/rest/api/azureopenai/fine-tuning) |
-| ![Bot service icon](../../ai-services/media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
-| ![Content Safety icon](../../ai-services/media/service-icons/content-safety.svg) [Content Safety](../../ai-services/content-safety/index.yml) | An AI service that detects unwanted contents | [Content Safety API](https://westus.dev.cognitive.microsoft.com/docs/services/content-safety-service-2023-10-15-preview/operations/TextBlocklists_AddOrUpdateBlocklistItems) |
-| ![Custom Vision icon](../../ai-services/media/service-icons/custom-vision.svg) [Custom Vision](../../ai-services/custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>&bullet; [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>&bullet; [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
-| ![Document Intelligence icon](../../ai-services/media/service-icons/document-intelligence.svg) [Document Intelligence](../../ai-services/document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions | [Document Intelligence API](/rest/api/aiservices/document-models?view=rest-aiservices-2023-07-31&preserve-view=true) |
-| ![Face icon](../../ai-services/medi) |
-| ![Language icon](../../ai-services/media/service-icons/language.svg) [Language](../../ai-services/language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | [REST API](/rest/api/language/) |
-| ![Speech icon](../../ai-services/medi) |
-| ![Translator icon](../../ai-services/medi)|
-| ![Video Indexer icon](../../ai-services/media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
-| ![Vision icon](../../ai-services/media/service-icons/vision.svg) [Vision](../../ai-services/computer-vision/index.yml) | Analyze content in images and videos | [Vision API](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2024-02-01/operations/61d65934cd35050c20f73ab6) |
+| ![Azure AI Search icon](../media/service-icons/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
+| ![Azure OpenAI Service icon](../medi)</br>&bullet; [fine-tuning](/rest/api/azureopenai/fine-tuning) |
+| ![Bot service icon](../media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
+| ![Content Safety icon](../media/service-icons/content-safety.svg) [Content Safety](../content-safety/index.yml) | An AI service that detects unwanted content | [Content Safety API](https://westus.dev.cognitive.microsoft.com/docs/services/content-safety-service-2023-10-15-preview/operations/TextBlocklists_AddOrUpdateBlocklistItems) |
+| ![Custom Vision icon](../media/service-icons/custom-vision.svg) [Custom Vision](../custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>&bullet; [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>&bullet; [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
+| ![Document Intelligence icon](../media/service-icons/document-intelligence.svg) [Document Intelligence](../document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions | [Document Intelligence API](/rest/api/aiservices/document-models?view=rest-aiservices-2023-07-31&preserve-view=true) |
+| ![Face icon](../medi) |
+| ![Language icon](../media/service-icons/language.svg) [Language](../language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | [REST API](/rest/api/language/) |
+| ![Speech icon](../medi) |
+| ![Translator icon](../medi)|
+| ![Video Indexer icon](../media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
+| ![Vision icon](../media/service-icons/vision.svg) [Vision](../computer-vision/index.yml) | Analyze content in images and videos | [Vision API](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2024-02-01/operations/61d65934cd35050c20f73ab6) |
## Deprecated services

| Service documentation | Description | Reference documentation |
| --- | --- | --- |
-| ![Anomaly Detector icon](../../ai-services/media/service-icons/anomaly-detector.svg) [Anomaly Detector](../../ai-services/Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
-| ![Content Moderator icon](../../ai-services/medi) |
-| ![Language Understanding icon](../../ai-services/media/service-icons/luis.svg) [Language understanding (LUIS)](../../ai-services/luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
-| ![Metrics Advisor icon](../../ai-services/media/service-icons/metrics-advisor.svg) [Metrics Advisor](../../ai-services/metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that detects unwanted contents | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
-| ![Personalizer icon](../../ai-services/media/service-icons/personalizer.svg) [Personalizer](../../ai-services/personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
-| ![QnA Maker icon](../../ai-services/media/service-icons/luis.svg) [QnA maker](../../ai-services/qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |
+| ![Anomaly Detector icon](../media/service-icons/anomaly-detector.svg) [Anomaly Detector](../Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
+| ![Content Moderator icon](../medi) |
+| ![Language Understanding icon](../media/service-icons/luis.svg) [Language understanding (LUIS)](../luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
+| ![Metrics Advisor icon](../media/service-icons/metrics-advisor.svg) [Metrics Advisor](../metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that monitors metrics and diagnoses issues | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
+| ![Personalizer icon](../media/service-icons/personalizer.svg) [Personalizer](../personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
+| ![QnA Maker icon](../media/service-icons/luis.svg) [QnA maker](../qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |
## Next steps
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
To use a Whisper model for batch transcription, you need to set the `model` prop
> [!IMPORTANT]
> For Whisper models, you should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API.
-Whisper models by batch transcription are supported in the East US, Southeast Asia, and West Europe regions.
+Whisper models via batch transcription are supported in the Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe regions.
::: zone pivot="rest-api" You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview2/operations/Models_ListBaseModels) request to get available base models for all locales.
ai-services Whisper Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/whisper-overview.md
Whisper Model via Azure AI Speech might be best for:
- Customization of the Whisper base model to improve accuracy for your scenario (coming soon)

Regional support is another consideration.
-- The Whisper model via Azure OpenAI Service is available in the following regions: North Central US and West Europe.
-- The Whisper model via Azure AI Speech is available in the following regions: East US, Southeast Asia, and West Europe.
+- The Whisper model via Azure OpenAI Service is available in the following regions: East US 2, India South, North Central US, Norway East, Sweden Central, and West Europe.
+- The Whisper model via Azure AI Speech is available in the following regions: Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe.
## Next steps
ai-services Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/configuration.md
+
+Title: Configure containers - Translator
+
+description: The Translator container runtime environment is configured using the `docker run` command arguments. There are both required and optional settings.
+
+Last updated: 04/08/2024
+
+recommendations: false
+
+# Configure Translator Docker containers
+
+Azure AI services provide each container with a common configuration framework. You can easily configure your Translator containers to build Translator application architecture optimized for robust cloud capabilities and edge locality.
+
+The **Translator** container runtime environment is configured using the `docker run` command arguments. This container has both required and optional settings. The required container-specific settings are the billing settings.
+
+## Configuration settings
+
+The container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[ApiKey](#apikey-configuration-setting)|Tracks billing information.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetric support to your container.|
+|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.|
+|Yes|[EULA](#eula-setting)| Indicates that you accepted the end-user license agreement (EULA) for the container.|
+|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
+|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+|Yes|[Mounts](#mount-settings)|Reads and writes data from the host computer to the container and from the container back to the host computer.|
+
+> [!IMPORTANT]
+> The [**ApiKey**](#apikey-configuration-setting), [**Billing**](#billing-configuration-setting), and [**EULA**](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Install and run Azure AI Translator container](install-run.md).
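+
+A minimal start-command sketch that supplies the three required settings together (the endpoint, key, model mount, and language list values are placeholders; adjust memory and CPU for your environment):
+
+```bash
+# EULA, billing endpoint, and API key are all required for the container to start
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e apikey={API_KEY} \
+-e Languages=en,fr \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```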
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Translator_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+* Azure portal: **Translator** resource management, under **Keys**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Translator_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Translator_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following place:
+
+* Azure portal: **Translator** Overview page labeled `Endpoint`
+
+| Required | Name | Data type | Description |
+| -- | -- | -- | -- |
+| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](translator-how-to-install-container.md#required-input). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../../cognitive-services-custom-subdomains.md). |
+
+## EULA setting
++
+## Fluentd settings
++
+## HTTP/HTTPS proxy credentials settings
+
+If you need to configure an HTTP proxy for making outbound requests, use these two arguments:
+
+| Name | Data type | Description |
+|--|--|--|
+|HTTPS_PROXY|string|The proxy to use, for example, `https://proxy:8888`<br>`<proxy-url>`|
+|HTTP_PROXY_CREDS|string|Any credentials needed to authenticate against the proxy, for example, `username:password`. This value **must be in lower-case**. |
+|`<proxy-user>`|string|The user for the proxy.|
+|`<proxy-password>`|string|The password associated with `<proxy-user>` for the proxy.|
+||||
+
+```bash
+docker run --rm -it -p 5000:5000 \
+--memory 2g --cpus 1 \
+--mount type=bind,src=/home/azureuser/output,target=/output \
+<registry-location>/<image-name> \
+Eula=accept \
+Billing=<endpoint> \
+ApiKey=<api-key> \
+HTTPS_PROXY=<proxy-url> \
+HTTP_PROXY_CREDS=<proxy-user>:<proxy-password>
+```
+
+## Logging settings
+
+Translator containers support the following logging providers:
+
+|Provider|Purpose|
+|--|--|
+|[Console](/aspnet/core/fundamentals/logging/#console-provider)|The ASP.NET Core `Console` logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported.|
+|[Debug](/aspnet/core/fundamentals/logging/#debug-provider)|The ASP.NET Core `Debug` logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported.|
+|[Disk](#disk-logging)|The JSON logging provider. This logging provider writes log data to the output mount.|
+
+* The `Logging` settings manage ASP.NET Core logging support for your container. You can use the same configuration settings and values for your container that you use for an ASP.NET Core application.
+
+* The `Logging.LogLevel` specifies the minimum level to log. The severity of the `LogLevel` ranges from 0 to 6. When a `LogLevel` is specified, logging is enabled for messages at the specified level and higher: Trace = 0, Debug = 1, Information = 2, Warning = 3, Error = 4, Critical = 5, None = 6.
+
+* Currently, Translator containers can restrict logging to the **Warning** `LogLevel` or higher.
+
+The general command syntax for logging is as follows:
+
+```bash
+ -Logging:LogLevel:{Provider}={FilterSpecs}
+```
+
+The following command starts the Docker container with the `LogLevel` set to **Warning** and logging provider set to **Console**. This command prints anomalous or unexpected events during the application flow to the console:
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e apikey={API_KEY} \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e Languages=en,fr,es,ar,ru \
+-e Logging:LogLevel:Console="Warning" \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+
+```
+
+### Disk logging
+
+The `Disk` logging provider supports the following configuration settings:
+
+| Name | Data type | Description |
+||--|-|
+| `Format` | String | The output format for log files.<br/> **Note:** This value must be set to `json` to enable the logging provider. If this value is specified without also specifying an output mount while instantiating a container, an error occurs. |
+| `MaxFileSize` | Integer | The maximum size, in megabytes (MB), of a log file. When the size of the current log file meets or exceeds this value, the logging provider starts a new log file. If -1 is specified, the size of the log file is limited only by the maximum file size, if any, for the output mount. The default value is 1. |
+
+#### Disk provider example
+
+```bash
+docker run --rm -it -p 5000:5000 \
+--memory 2g --cpus 1 \
+--mount type=bind,src=/home/azureuser/output,target=/output \
+-e apikey={API_KEY} \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e Languages=en,fr,es,ar,ru \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest \
+Logging:Disk:Format=json \
+Mounts:Output=/output
+```
+
+For more information about configuring ASP.NET Core logging support, see [Settings file configuration](/aspnet/core/fundamentals/logging/).
+
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or an output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
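+
+A minimal output-mount sketch (the host path is illustrative), matching the disk logging example earlier in this article:
+
+```bash
+# Bind-mount a host folder as the container's output mount
+docker run --rm -it -p 5000:5000 \
+--mount type=bind,src=/home/azureuser/output,target=/output \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e apikey={API_KEY} \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest \
+Mounts:Output=/output
+```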
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure AI containers](../../cognitive-services-container-support.md)
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/install-run.md
+
+Title: Install and run Translator container using Docker API
+
+description: Use the Translator container and API to translate text and documents.
+
+Last updated: 04/08/2024
+
+recommendations: false
+keywords: on-premises, Docker, container, identify
+
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD033 -->
+
+# Install and run Azure AI Translator container
+
+> [!IMPORTANT]
+>
+> * To use the Translator container, you must submit an online request and have it approved. For more information, *see* [Request container access](overview.md#request-container-access).
+> * Azure AI Translator container supports limited features compared to the cloud offerings.
+
+Containers enable you to host the Azure AI Translator API on your own infrastructure. The container image includes all libraries, tools, and dependencies needed to run an application consistently in any private, public, or personal computing environment. If your security or data governance requirements can't be fulfilled by calling Azure AI Translator API remotely, containers are a good option.
+
+In this article, learn how to install and run the Translator container online with Docker API. The Azure AI Translator container supports the following operations:
+
+* **Text Translation**. Translate the contextual meaning of words or phrases from supported `source` to supported `target` language in real time. For more information, *see* [**Container: translate text**](translator-container-supported-parameters.md).
+
+* **Text Transliteration**. Convert text from one language script or writing system to another language script or writing system in real time. For more information, *see* [Container: transliterate text](transliterate-text-parameters.md).
+
+* **Document translation**. Synchronously translate documents while preserving structure and format in real time. For more information, *see* [Container: translate documents](translate-document-parameters.md).
+
+## Prerequisites
+
+To get started, you need the following resources, access approval, and tools:
+
+##### Azure resources
+
+* An active [**Azure subscription**](https://portal.azure.com/). If you don't have one, you can [**create a free 12-month account**](https://azure.microsoft.com/free/).
+
+* An approved access request to either a [Translator connected container](https://aka.ms/csgate-translator) or [Translator disconnected container](https://aka.ms/csdisconnectedcontainers).
+
+* An [**Azure AI Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource) created under the approved subscription ID. You need the API key and endpoint URI associated with your resource. Both values are required to start the container and can be found on the resource overview page in the Azure portal.
+
+ * For Translator **connected** containers, select the `S1` pricing tier.
+ * For Translator **disconnected** containers, select **`Commitment tier disconnected containers`** as your pricing tier. You only see the option to purchase a commitment tier if your disconnected container access request is approved.
+
+ :::image type="content" source="media/disconnected-pricing-tier.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
+
+##### Docker tools
+
+You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).
+
+ > [!TIP]
+ >
+ > Consider adding **Docker Desktop** to your computing environment. Docker Desktop is a graphical user interface (GUI) that enables you to build, run, and share containerized applications directly from your desktop.
+ >
+ > Docker Desktop includes Docker Engine, the Docker CLI client, and Docker Compose, and provides packages that configure Docker for your preferred operating system:
+ >
+ > * [macOS](https://docs.docker.com/docker-for-mac/)
+ > * [Windows](https://docs.docker.com/docker-for-windows/)
+ > * [Linux](https://docs.docker.com/engine/installation/#supported-platforms)
+
+|Tool|Description|Condition|
+|-|--||
+|[**Docker Engine**](https://docs.docker.com/engine/)|The **Docker Engine** is the core component of the Docker containerization platform. It must be installed on a [host computer](#host-computer-requirements) to enable you to build, run, and manage your containers.|***Required*** for all operations.|
+|[**Docker Compose**](https://docs.docker.com/compose/)| The **Docker Compose** tool is used to define and run multi-container applications.|***Required*** for [supporting containers](#use-cases-for-supporting-containers).|
+|[**Docker CLI**](https://docs.docker.com/engine/reference/commandline/cli/)|The Docker command-line interface enables you to interact with Docker Engine and manage Docker containers directly from your local machine.|***Recommended***|
+
+##### Host computer requirements
++
+##### Recommended CPU cores and memory
+
+> [!NOTE]
+> The minimum and recommended specifications are based on Docker limits, not host machine resources.
+
+The following table describes the minimum and recommended specifications and the allowable Transactions Per Second (TPS) for each container.
+
+ |Function | Minimum recommended |Notes|
+ |--|||
+ |Text translation| 4 Core, 4-GB memory ||
+ |Text transliteration| 4 Core, 2-GB memory ||
+ |Document translation | 4 Core, 6-GB memory|The number of documents that can be processed concurrently can be calculated with the following formula: minimum of (`n-2`, `(m-6)/4`). <br>&bullet; `n` is the number of CPU cores.<br>&bullet; `m` is GB of memory.<br>&bullet; **Example**: 8 Core, 32-GB memory can process six (6) concurrent documents: minimum of (`8-2`, `(32-6)/4`) = 6.|
+
+* Each core must be at least 2.6 gigahertz (GHz) or faster.
+
+* For every language pair, 2 GB of memory is recommended.
+
+* In addition to the baseline requirements, allocate 4 GB of memory for every concurrent document processed.
+
+ > [!TIP]
+ > You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. For example, the following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
+ >
+ > ```docker
+ > docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
+ >
+ > IMAGE ID REPOSITORY TAG
+ > <image-id> <repository-path/name> <tag-name>
+ > ```
+
+## Required input
+
+All Azure AI containers require the following input values:
+
+* **EULA accept setting**. You must have an end-user license agreement (EULA) set with a value of `Eula=accept`.
+
+* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to your Azure AI Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
+
+* If you're translating documents, be sure to use the document translation endpoint.
+
+> [!IMPORTANT]
+>
+> * Keys are used to access your Azure AI resource. Do not share your keys. Store them securely, for example, using Azure Key Vault.
+>
+> * We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+
+## Billing
+
+* Queries to the container are billed at the pricing tier of the Azure resource used for the API `Key`.
+
+* You're billed for each container instance used to process your documents and images.
+
+* The [docker run](https://docs.docker.com/engine/reference/commandline/run/) command downloads an image from Microsoft Artifact Registry and starts the container when all three of the following options are provided with valid values:
+
+| Option | Description |
+|--|-|
+| `ApiKey` | The key of the Azure AI services resource used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource specified in `Billing`. |
+| `Billing` | The endpoint of the Azure AI services resource used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
+| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
+
+### Connecting to Azure
+
+* The container billing argument values allow the container to connect to the billing endpoint and run.
+
+* The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run, but doesn't serve queries until the billing endpoint is restored.
+
+* A connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Azure AI container FAQ](../../../ai-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
+
+## Container images and tags
+
+The Azure AI services container images can be found in the [**Microsoft Artifact Registry**](https://mcr.microsoft.com/catalog?page=3) catalog. Azure AI Translator container resides within the azure-cognitive-services/translator repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`.
+
+To use the latest version of the container, use the `latest` tag. You can view the full list of [Azure AI services Text Translation](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags) version tags on MCR.
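+
+For example, the following command pulls the most recent image by using the `latest` tag:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```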
+
+## Use containers
+
+Select a tab to choose your Azure AI Translator container environment:
+
+## [**Connected containers**](#tab/connected)
+
+Azure AI Translator containers enable you to run the Azure AI Translator service on-premises in your own environment. Connected containers run locally and send usage information to the cloud for billing.
+
+## Download and run container image
+
+The [docker run](https://docs.docker.com/engine/reference/commandline/run/) command downloads an image from Microsoft Artifact Registry and starts the container.
+
+> [!IMPORTANT]
+>
+> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `EULA`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start.
+> * If you're translating documents, be sure to use the document translation endpoint.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e apikey={API_KEY} \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e Languages=en,fr,es,ar,ru \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```
+
+The above command:
+
+* Creates a running Translator container from a downloaded container image.
+* Allocates 12 gigabytes (GB) of memory and four CPU cores.
+* Exposes transmission control protocol (TCP) port 5000 and allocates a pseudo-TTY for the container. Now, the `localhost` address points to the container itself, not your host machine.
+* Accepts the end-user agreement (EULA).
+* Configures billing endpoint.
+* Downloads translation models for languages English, French, Spanish, Arabic, and Russian.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+> [!TIP]
+> Additional Docker commands:
+>
+> * `docker ps` lists running containers.
+> * `docker pause {your-container name}` pauses a running container.
+> * `docker unpause {your-container-name}` unpauses a paused container.
+> * `docker restart {your-container-name}` restarts a running container.
+> * `docker exec` enables you to execute commands to *detach* or *set environment variables* in a running container.
+>
+> For more information, *see* [docker CLI reference](https://docs.docker.com/engine/reference/commandline/docker/).
+
+### Run multiple containers on the same host
+
+If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
+
+You can have this container and a different Azure AI container running on the HOST together. You also can have multiple containers of the same Azure AI container running.
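+
+A minimal sketch of two instances on the same host, each mapped to a different host port (the settings mirror the connected-container example above):
+
+```bash
+# First instance listens on host port 5000
+docker run --rm -d -p 5000:5000 --memory 12g --cpus 4 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e apikey={API_KEY} -e eula=accept -e billing={ENDPOINT_URI} -e Languages=en,fr \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+
+# Second instance listens on host port 5001; inside each container the service still uses port 5000
+docker run --rm -d -p 5001:5000 --memory 12g --cpus 4 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e apikey={API_KEY} -e eula=accept -e billing={ENDPOINT_URI} -e Languages=en,fr \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```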
+
+## Query the Translator container endpoint
+
+The container provides a REST-based Translator endpoint API. Here's an example request with source language (`from=en`) specified:
+
+ ```bash
+ curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS" -H "Content-Type: application/json" -d "[{'Text':'Hello, what is your name?'}]"
+ ```
+
+> [!NOTE]
+>
+> * Source language detection requires an additional container. For more information, *see* [Supporting containers](#use-cases-for-supporting-containers)
+>
+> * If the cURL POST request returns a `Service is temporarily unavailable` response, the container isn't ready. Wait a few minutes, then try again.
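+
+The container also exposes text transliteration. Here's a sketch, assuming the container mirrors the cloud Text Translation v3.0 `transliterate` operation; verify the supported parameters in [Container: transliterate text](transliterate-text-parameters.md):
+
+```bash
+curl -X POST "http://localhost:5000/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn" -H "Content-Type: application/json" -d "[{'Text':'こんにちは'}]"
+```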
+
+### [**Disconnected (offline) containers**](#tab/disconnected)
+
+Disconnected containers enable you to use the Azure AI Translator API by exporting the docker image to your machine with internet access and then using Docker offline. Disconnected containers are intended for scenarios where no connectivity with the cloud is needed for the containers to run.
+
+## Disconnected container commitment plan
+
+* Commitment plans for disconnected containers have a calendar year commitment period.
+
+* When you purchase a plan, you're charged the full price immediately.
+
+* During the commitment period, you can't change your commitment plan; however, you can purchase more units at a prorated price for the remaining days in the year.
+
+* You have until midnight (UTC) on the last day of your commitment to end or change a commitment plan.
+
+* You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section.
+
+## Create a new Translator resource and purchase a commitment plan
+
+1. Create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+1. To create your resource, enter the applicable information. Be sure to select **Commitment tier disconnected containers** as your pricing tier. You only see the option to purchase a commitment tier if you're approved.
+
+ :::image type="content" source="media/disconnected-pricing-tier.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
+
+1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+
+### End a commitment plan
+
+* If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**.
+
+* Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're still able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing.
+
+* You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you avoid charges for the following year.
+
+## Gather required parameters
+
+There are three required parameters for all Azure AI services' containers:
+
+* The end-user license agreement (EULA) must be present with a value of *accept*.
+
+* The ***Containers*** endpoint URL for your resource from the Azure portal.
+
+* The API key for your resource from the Azure portal.
+
+Both the endpoint URL and API key are needed when you first run the container to implement the disconnected usage configuration. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
+
+ :::image type="content" source="media/keys-endpoint-container.png" alt-text="Screenshot of Azure portal keys and endpoint page.":::
+
+> [!IMPORTANT]
+> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
+> If you're translating **documents**, be sure to use the document translation endpoint.
+
+## Pull and load the Translator container image
+
+1. You should have [Docker tools](#docker-tools) installed in your local environment.
+
+1. Download the Azure AI Translator container with `docker pull`.
+
+ |Docker pull command | Value |Format|
+ |-|-||
+ |&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest |
+ ||||
+ |&bullet; **`docker pull [image]:[version]`** | A specific container image |mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64 |
+
+ **Example Docker pull command:**
+
+ ```docker
+ docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+ ```
+
+1. Save the image to a `.tar` file.
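+
+ A minimal `docker save` sketch (the archive file name is illustrative):
+
+ ```bash
+ # Export the pulled image to a portable .tar archive
+ docker save --output translator-text-translation.tar mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+ ```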
+
+1. Load the `.tar` file to your local Docker instance. For more information, *see* [Docker: load images from a file](https://docs.docker.com/reference/cli/docker/image/load/#input).
+
+ ```bash
+ docker load --input {path-to-your-file}.tar
+
+ ```
+
+## Configure the container to run in a disconnected environment
+
+Now that you downloaded your container, you can execute the `docker run` command with the following parameters:
+
+* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file can't be used to run the container. You can only use the license file in the corresponding approved container.
+* **`Languages={language list}`**. You must include this parameter to download model files for the [languages](../language-support.md) you want to translate.
+
+> [!IMPORTANT]
+> The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
+
+The following example shows the formatting for the `docker run` command with placeholder values. Replace these placeholder values with your own values.
+
+| Placeholder | Value | Format|
+|:-|:-|::|
+| `[image]` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
+| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
+ | `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Text Translation resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`|
+| `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+
+ **Example `docker run` command**
+
+```bash
+
+docker run --rm -it -p 5000:5000 \
+-v {MODEL_MOUNT_PATH} \
+-v {LICENSE_MOUNT} \
+-e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+-e DownloadLicense=true \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e apikey={API_KEY} \
+-e Languages={LANGUAGES_LIST} \
+[image]
+```
+
+### Translator translation models and container configuration
+
+After you [configured the container](#configure-the-container-to-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output:
+
+```bash
+ -e MODELS=/usr/local/models/model1/,/usr/local/models/model2/
+ -e TRANSLATORSYSTEMCONFIG=/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832
+```
+
+## Run the container in a disconnected environment
+
+Once the license file is downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholder values with your own values.
+
+Whenever the container runs, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
+
+|Placeholder | Value | Format|
+|-|-||
+| `[image]`| The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
+|`{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `16g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
+| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/translator/license:/path/to/license/directory` |
+|`{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
+|`{MODELS_DIRECTORY_LIST}`|List of comma separated directories each having a machine translation model. | `/usr/local/models/enu_esn_generalnn_2022240501,/usr/local/models/esn_enu_generalnn_2022240501` |
+| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
+|`{TRANSLATOR_CONFIG_JSON}`| Translator system configuration file used by container internally.| `/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832` |
+
+ **Example `docker run` command**
+
+```docker
+
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {MODEL_MOUNT_PATH} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+-e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+-e Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} \
+-e MODELS={MODELS_DIRECTORY_LIST} \
+-e TRANSLATORSYSTEMCONFIG={TRANSLATOR_CONFIG_JSON} \
+-e eula=accept \
+[image]
+```
+
+### Troubleshooting
+
+Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
+
+> [!TIP]
+> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../containers/disconnected-container-faq.yml).
+++
+## Validate that a container is running
+
+There are several ways to validate that the container is running:
+
+* The container provides a homepage at `/` as a visual validation that the container is running.
+
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container can vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+
+| Request URL | Purpose |
+|--|--|
+| `http://localhost:5000/` | The container provides a home page. |
+| `http://localhost:5000/ready` | Requested with GET. Provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/status` | Requested with GET. Verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the required HTTP headers and body format. |
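+
+A quick sketch that probes the readiness and status endpoints from the host (assumes the container is mapped to local port 5000):
+
+```bash
+# Expect HTTP 200 when the container is ready to accept queries
+curl -i http://localhost:5000/ready
+
+# Expect HTTP 200 when the api-key used to start the container is valid
+curl -i http://localhost:5000/status
+```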
+++
+## Stop the container
++
+## Use cases for supporting containers
+
+Some Translator queries require supporting containers to successfully complete operations. **If you are using Office documents and don't require source language detection, only the Translator container is required.** However, if source language detection is required or you're using scanned PDF documents, supporting containers are required.
+
+The following table lists the required supporting containers for your text and document translation operations. The Translator container sends billing information to Azure via the Azure AI Translator resource on your Azure account.
+
+|Operation|Request query|Document type|Supporting containers|
+|--|--|--|--|
+|&bullet; Text translation<br>&bullet; Document Translation |`from` specified. |Office documents| None|
+|&bullet; Text translation<br>&bullet; Document Translation|`from` not specified. Requires automatic language detection to determine the source language. |Office documents |✔️ [**Text analytics:language**](../../language-service/language-detection/how-to/use-containers.md) container|
+|&bullet; Text translation<br>&bullet; Document Translation |`from` specified. |Scanned PDF documents| ✔️ [**Vision:read**](../../computer-vision/computer-vision-how-to-install-containers.md) container|
+|&bullet; Text translation<br>&bullet; Document Translation|`from` not specified requiring automatic language detection to determine source language.|Scanned PDF documents| ✔️ [**Text analytics:language**](../../language-service/language-detection/how-to/use-containers.md) container<br><br>✔️ [**Vision:read**](../../computer-vision/computer-vision-how-to-install-containers.md) container|
+
+## Operate supporting containers with `docker compose`
+
+Docker Compose is a tool that enables you to configure multi-container applications using a single YAML file, typically named `compose.yaml`. Use the `docker compose up` command to start your container application and the `docker compose down` command to stop and remove your containers.
+
+Docker Desktop includes Docker Compose and its prerequisites. If you don't have Docker Desktop, see the [Installing Docker Compose overview](https://docs.docker.com/compose/install/).
+
+### Create your application
+
+1. Using your preferred editor or IDE, create a new directory for your app named `container-environment` or a name of your choice.
+
+1. Create a new YAML file named `compose.yaml`. Either the `.yml` or `.yaml` extension can be used for the `compose` file.
+
+1. Copy and paste the following YAML code sample into your `compose.yaml` file. Replace `{TRANSLATOR_KEY}` and `{TRANSLATOR_ENDPOINT_URI}` with the key and endpoint values from your Azure portal Translator instance. If you're translating documents, make sure to use the `document translation endpoint`.
+
+1. The service names (`azure-ai-translator`, `azure-ai-language`, `azure-ai-read`) are parameters that you specify.
+
+1. The `container_name` is an optional parameter that sets a name for the container when it runs, rather than letting `docker compose` generate a name.
+
+ ```yml
+
+    services:
+      azure-ai-translator:
+        container_name: azure-ai-translator
+        image: mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+        environment:
+          - EULA=accept
+          - billing={TRANSLATOR_ENDPOINT_URI}
+          - apiKey={TRANSLATOR_KEY}
+          - AzureAiLanguageHost=http://azure-ai-language:5000
+          - AzureAiReadHost=http://azure-ai-read:5000
+        ports:
+          - "5000:5000"
+      azure-ai-language:
+        container_name: azure-ai-language
+        image: mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest
+        environment:
+          - EULA=accept
+          - billing={TRANSLATOR_ENDPOINT_URI}
+          - apiKey={TRANSLATOR_KEY}
+      azure-ai-read:
+        container_name: azure-ai-read
+        image: mcr.microsoft.com/azure-cognitive-services/vision/read:latest
+        environment:
+          - EULA=accept
+          - billing={TRANSLATOR_ENDPOINT_URI}
+          - apiKey={TRANSLATOR_KEY}
+ ```
+
+1. Open a terminal, navigate to the `container-environment` folder, and start the containers with the following `docker compose` command:
+
+ ```bash
+ docker compose up
+ ```
+
+1. To stop the containers, use the following command:
+
+ ```bash
+ docker compose down
+ ```
+
+ > [!TIP]
+ > Helpful Docker commands:
+ >
+ > * `docker compose pause` pauses running containers.
+ > * `docker compose unpause {your-container-name}` unpauses paused containers.
+ > * `docker compose restart` restarts all stopped and running containers with their previous changes intact. If you make changes to your `compose.yaml` configuration, these changes aren't applied with the `docker compose restart` command. You have to use the `docker compose up` command to reflect updates and changes in the `compose.yaml` file.
+ > * `docker compose ps -a` lists all containers, including those that are stopped.
+ > * `docker compose exec` enables you to execute commands to *detach* or *set environment variables* in a running container.
+ >
+ > For more information, *see* [docker CLI reference](https://docs.docker.com/engine/reference/commandline/docker/).
+
+### Translator and supporting container images and tags
+
+The Azure AI services container images can be found in the [**Microsoft Artifact Registry**](https://mcr.microsoft.com/catalog?page=3) catalog. The following table lists the fully qualified image location for text and document translation:
+
+|Container|Image location|Notes|
+|--|-||
+|Translator: Text and document translation| `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`| You can view the full list of [Azure AI services Text Translation](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags) version tags on MCR.|
+|Text analytics: language|`mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest` |You can view the full list of [Azure AI services Text Analytics Language](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/tags) version tags on MCR.|
+|Vision: read|`mcr.microsoft.com/azure-cognitive-services/vision/read:latest`|You can view the full list of [Azure AI services Computer Vision Read `OCR`](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/tags) version tags on MCR.|
+
+## Other parameters and commands
+
+Here are a few more parameters and commands you can use to run the container:
+
+#### Usage records
+
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
+
+#### Arguments for storing logs
+
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
+
+ **Example `docker run` command**
+
+```docker
+docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
+```
+
+#### Environment variable names in Kubernetes deployments
+
+* Some Azure AI Containers, for example Translator, require users to pass environmental variable names that include colons (`:`) when running the container.
+
+* Kubernetes doesn't accept colons in environmental variable names.
+To resolve, you can replace colons with two underscore characters (`__`) when deploying to Kubernetes. See the following example of an acceptable format for environmental variable names:
+
+```Kubernetes
+ env:
+ - name: Mounts__License
+ value: "/license"
+ - name: Mounts__Output
+ value: "/output"
+```
+
+This example replaces the default format for the `Mounts:License` and `Mounts:Output` environment variable names in the docker run command.
+
+#### Get usage records using the container endpoints
+
+The container provides two endpoints for returning records regarding its usage.
+
+#### Get all records
+
+The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
+
+```HTTP
+https://<service>/records/usage-logs/
+```
+
+***Example HTTPS endpoint to retrieve all records***
+
+ `http://localhost:5000/records/usage-logs`
+
+#### Get records for a specific month
+
+The following endpoint provides a report summarizing usage over a specific month and year:
+
+```HTTP
+https://<service>/records/usage-logs/{MONTH}/{YEAR}
+```
+
+***Example HTTPS endpoint to retrieve records for a specific month and year***
+
+ `http://localhost:5000/records/usage-logs/03/2024`
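+
+For example, a quick sketch calling the monthly report endpoint from the host (assumes the container is mapped to local port 5000):
+
+```bash
+# Retrieve the usage report for March 2024 from a running container
+curl http://localhost:5000/records/usage-logs/03/2024
+```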
+
+The usage-logs endpoints return a JSON response similar to the following example:
+
+***Connected container***
+
+The `quantity` is the amount you're charged for connected container usage.
+
+ ```json
+ {
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 256345435
+ }
+ ]
+ }
+ ```
+
+***Disconnected container***
+
+ ```json
+ {
+ "type": "CommerceUsageResponse",
+ "meters": [
+ {
+ "name": "CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters",
+ "quantity": 1250000,
+ "billedUnit": 1875000
+ },
+ {
+ "name": "CognitiveServices.TextTranslation.Container.TranslatedCharacters",
+ "quantity": 1250000,
+ "billedUnit": 1250000
+ }
+ ],
+ "apiType": "texttranslation",
+ "serviceName": "texttranslation"
+ }
+ ```
+
+The aggregated value of `billedUnit` for the following meters is counted towards the characters you licensed for your disconnected container usage:
+
+* `CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters`
+
+* `CognitiveServices.TextTranslation.Container.TranslatedCharacters`
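+
+For example, with the sample disconnected-container response shown above, the characters counted against your license would be 1,875,000 + 1,250,000 = 3,125,000.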
+
+### Summary
+
+In this article, you learned concepts and workflows for downloading, installing, and running an Azure AI Translator container:
+
+* Azure AI Translator container supports text translation, synchronous document translation, and text transliteration.
+
+* Container images are downloaded from the container registry and run in Docker.
+
+* The billing information must be specified when you instantiate a container.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure AI container configuration](translator-container-configuration.md)
+
+> [!div class="nextstepaction"]
+> [Learn more about container language support](../language-support.md#translation)
+
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/overview.md
+
+ Title: What is Azure AI Translator container?
+
+description: Translate text and documents using the Azure AI Translator container.
++++ Last updated : 04/08/2024+++
+# What is Azure AI Translator container?
+
+> [!IMPORTANT]
+>
+> * To use the Translator container, you must submit an online request and have it approved. For more information, *see* [Request container access](#request-container-access).
+> * Azure AI Translator container supports limited features compared to the cloud offerings. For more information, *see* [**Container translate methods**](translator-container-supported-parameters.md).
+
+Azure AI Translator container enables you to build a translator application architecture that's optimized for both robust cloud capabilities and edge locality. A container is a running instance of an executable software image. The Translator container image includes all libraries, tools, and dependencies needed to run an application consistently in any private, public, or personal computing environment. Containers are isolated, lightweight, portable, and are great for implementing specific security or data governance requirements. The Translator container is available in [connected](#connected-containers) and [disconnected (offline)](#disconnected-containers) modalities.
+
+## Connected containers
+
+* **Translator connected container** is deployed on premises and processes content in your environment. It requires internet connectivity to transmit usage metadata for billing; however, your customer content isn't transmitted outside of your premises.
+
+You're billed for connected containers monthly, based on the usage and consumption. The container needs to be configured to send metering data to Azure, and transactions are billed accordingly. Queries to the container are billed at the pricing tier of the Azure resource used for the API Key. You're billed for each container instance used to process your documents and images.
+
+ ***Sample billing metadata transmitted by Translator connected container***
+
+ The `quantity` is the amount you're charged for connected container usage.
+
+ ```json
+ {
+ "apiType": "texttranslation",
+ "id": "ab1cf234-0056-789d-e012-f3ghi4j5klmn",
+ "containerType": "123a5bc06d7e",
+ "quantity": 125000
+
+ }
+ ```
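+
+To transmit that metering data, the connected container is started with your resource's endpoint and key. Here's a minimal sketch, assuming the text translation image and placeholder values for your Translator resource:
+
+```bash
+docker run --rm -it -p 5000:5000 \
+  -e eula=accept \
+  -e billing={TRANSLATOR_ENDPOINT_URI} \
+  -e apikey={TRANSLATOR_KEY} \
+  mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```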
+
+## Disconnected containers
+
+* **Translator disconnected container** is deployed on premises and processes content in your environment. It doesn't require internet connectivity at runtime. Customers must license the container for projected usage over a year and are charged upfront.
+
+Disconnected containers are offered through commitment tier pricing at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Translator Service features for a fixed fee, at a predictable total cost, based on the needs of your workload. Commitment plans for disconnected containers have a calendar year commitment period.
+
+When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase more units at a prorated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
+
+ ***Sample billing metadata transmitted by Translator disconnected container***
+
+ ```json
+ {
+ "type": "CommerceUsageResponse",
+ "meters": [
+ {
+ "name": "CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters",
+ "quantity": 1250000,
+ "billedUnit": 1875000
+ },
+ {
+ "name": "CognitiveServices.TextTranslation.Container.TranslatedCharacters",
+ "quantity": 1250000,
+ "billedUnit": 1250000
+ }
+ ],
+ "apiType": "texttranslation",
+ "serviceName": "texttranslation"
+ }
+```
+
+The aggregated value of `billedUnit` for the following meters is counted towards the characters you licensed for your disconnected container usage:
+
+* `CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters`
+
+* `CognitiveServices.TextTranslation.Container.TranslatedCharacters`
++
+## Request container access
+
+Translator containers are a gated offering. To use the Translator container, you must submit an online request and have it approved.
+
+* To request access to a connected container, complete and submit the [**connected container access request form**](https://aka.ms/csgate-translator).
+
+* To request access to a disconnected container, complete and submit the [**disconnected container request form**](https://aka.ms/csdisconnectedcontainers).
+
+* The form requests information about you, your company, and the user scenario for which you use the container. After you submit the form, the Azure AI services team reviews it and emails you with a decision within 10 business days.
+
+ > [!IMPORTANT]
+ > ✔️ On the form, you must use an email address associated with an Azure subscription ID.
+ >
+ > ✔️ The Azure resource you use to run the container must have been created with the approved Azure subscription ID.
+ >
+ > ✔️ Check your email (both inbox and junk folders) for updates on the status of your application from Microsoft.
+
+* After you're approved, you can run the container after downloading it from the Microsoft Container Registry (MCR), as shown in the sketch after this list.
+
+* You can't access the container if your Azure subscription isn't approved.
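+
+For example, the text translation image can be pulled from MCR as follows (a sketch; see the install article for the full list of images and tags):
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```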
+
+## Next steps
+
+[Install and run Azure AI translator containers](install-run.md).
ai-services Translate Document Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translate-document-parameters.md
+
+ Title: "Container: Translate document method"
+
+description: Understand the parameters, headers, and body request/response messages for the Azure AI Translator container translate document operation.
+#
+++++ Last updated : 04/08/2024+++
+# Container: Translate Documents
+
+**Translate document with source language specified**.
+
+## Request URL (using cURL)
+
+`POST` request:
+
+```http
+ POST {Endpoint}/translate?api-version=3.0&to={to}
+```
+
+***With optional parameters***
+
+```http
+POST {Endpoint}/translate?api-version=3.0&from={from}&to={to}&textType={textType}&category={category}&profanityAction={profanityAction}&profanityMarker={profanityMarker}&includeAlignment={includeAlignment}&includeSentenceLength={includeSentenceLength}&suggestedFrom={suggestedFrom}&fromScript={fromScript}&toScript={toScript}
+```
+
+Example:
+
+```bash
+curl -i -X POST "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview" -F "document=@{path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file-with-file-extension}"
+```
+
+## Synchronous request headers and parameters
+
+Use synchronous translation processing to send a document as part of the HTTP request body and receive the translated document in the HTTP response.
+
+|Query parameter&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;|Description| Condition|
+|||-|
+|`-X` or `--request` `POST`|The -X flag specifies the request method to access the API.|*Required* |
+|`{endpoint}` |The URL for your Document Translation resource endpoint|*Required* |
+|`targetLanguage`|Specifies the language of the output document. The target language must be one of the supported languages included in the translation scope.|*Required* |
+|`sourceLanguage`|Specifies the language of the input document. If the `sourceLanguage` parameter isn't specified, automatic language detection is applied to determine the source language. |*Optional*|
+|`-H` or `--header` `"Ocp-Apim-Subscription-Key:{KEY}` | Request header that specifies the Document Translation resource key authorizing access to the API.|*Required*|
+|`-F` or `--form` |The filepath to the document that you want to include with your request. Only one source document is allowed.|*Required*|
+|&bull; `document=`<br> &bull; `type={contentType}/fileExtension` |&bull; Path to the file location for your source document.</br> &bull; Content type and file extension.</br></br> Ex: **"document=@C:\Test\test-file.md;type=text/markdown**|*Required*|
+|`-o` or `--output`|The filepath to the response results.|*Required*|
+|`-F` or `--form` |The filepath to an optional glossary to include with your request. The glossary requires a separate `--form` flag.|*Optional*|
+| &bull; `glossary=`<br> &bull; `type={contentType}/fileExtension`|&bull; Path to the file location for your optional glossary file.</br> &bull; Content type and file extension.</br></br> Ex: **"glossary=@C:\Test\glossary-file.txt;type=text/plain**|*Optional*|
+
+✔️ For more information on **`contentType`**, *see* [**Supported document formats**](../document-translation/overview.md#synchronous-supported-document-formats).
+
+## Code sample: document translation
+
+> [!NOTE]
+>
+> * Each sample runs on the `localhost` that you specified with the `docker compose up` command.
+> * While your container is running, `localhost` points to the container itself.
+> * You don't have to use `localhost:5000`. You can use any port that is not already in use in your host environment.
+
+### Sample document
+
+For this project, you need a source document to translate. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) and store it in the same folder as your `compose.yaml` file (`container-environment`). The file name is `document-translation-sample.docx` and the source language is English.
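+
+For example, you can download the sample document from a terminal (a sketch using cURL; any download method works):
+
+```bash
+curl -L -o document-translation-sample.docx "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx"
+```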
+
+### Query Azure AI Translator endpoint (document)
+
+Here's an example cURL HTTP request using localhost:5000:
+
+```bash
+curl -v "http://localhost:5000/translator/documents:translateDocument?from=en&to=es&api-version=v1.0" -F "document=@document-translation-sample.docx" -o "document-translation-output.docx"
+```
+
+***Upon successful completion***:
+
+* The translated document is returned with the response.
+* The successful POST method returns a `200 OK` response code indicating that the service created the request.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about synchronous document translation](../document-translation/reference/synchronous-rest-api-guide.md)
ai-services Translate Text Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translate-text-parameters.md
+
+ Title: "Container: Translate text method"
+
+description: Understand the parameters, headers, and body messages for the Azure AI Translator container translate document operation.
+++++ Last updated : 04/08/2024+++
+# Container: Translate Text
+
+**Translate text**.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+POST {Endpoint}/translate?api-version=3.0&from={from}&to={to}
+```
+
+***Example request***
+
+```rest
+POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=es
+
+[
+ {
+ "Text": "I would really like to drive your car."
+ }
+]
+
+```
+
+***Example response***
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "Realmente me gustaría conducir su coche.",
+ "to": "es"
+ }
+ ]
+ }
+]
+```
++
+## Request parameters
+
+Request parameters passed on the query string are:
+
+### Required parameters
+
+| Query parameter | Description |Condition|
+| | ||
+| api-version | Version of the API requested by the client. Value must be `3.0`. |*Required parameter*|
+| from |Specifies the language of the input text.|*Required parameter*|
+| to |Specifies the language of the output text. For example, use `to=de` to translate to German.<br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |*Required parameter*|
+
+* You can query the service for `translation` scope [supported languages](../reference/v3-0-languages.md).
+* *See also* [Language support for transliteration](../language-support.md#translation).
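+
+For example, the following sketch queries the public Languages endpoint for the `translation` scope (this call doesn't require authentication):
+
+```bash
+curl -X GET "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation"
+```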
+
+### Optional parameters
+
+| Query parameter | Description |
+| | |
+| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
+| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
+
+### Request headers
+
+| Headers | Description |Condition|
+| | ||
+| Authentication headers |*See* [available options for authentication](../reference/v3-0-reference.md#authentication). |*Required request header*|
+| Content-Type |Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |*Required request header*|
+| Content-Length |The length of the request body. |*Optional*|
+| X-ClientTraceId | A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |*Optional*|
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
+
+```json
+[
+ {"Text":"I would really like to drive your car around the block a few times."}
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 100 elements.
+* The entire text included in the request can't exceed 10,000 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
+
+  * `to`: A string representing the language code of the target language.
+
+  * `text`: A string giving the translated text.
+
+  * `sentLen`: An object returning sentence boundaries in the input and output texts.
+
+    * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+    * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+    Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true` (see the sketch after this list).
+
+* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. The `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arabic script.
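+
+For example, the following request (a sketch assuming the container is mapped to `localhost:5000`) sets `includeSentenceLength=true`; each translation in the response then carries a `sentLen` object with `srcSentLen` and `transSentLen` arrays:
+
+```bash
+curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=es&includeSentenceLength=true" \
+  -H "Content-Type: application/json; charset=UTF-8" \
+  -d "[{'Text':'I would really like to drive your car. It is fast and comfortable.'}]"
+```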
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request and used for troubleshooting purposes. |
+| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: </br></br>&FilledVerySmallSquare; Custom - Request includes a custom system and at least one custom system was used during translation.</br>&FilledVerySmallSquare; Team - All other requests |
+
+## Response status codes
+
+If an error occurs, the request returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](../reference/v3-0-reference.md#errors).
+
+## Code samples: translate text
+
+> [!NOTE]
+>
+> * Each sample runs on the `localhost` that you specified with the `docker run` command.
+> * While your container is running, `localhost` points to the container itself.
+> * You don't have to use `localhost:5000`. You can use any port that is not already in use in your host environment.
+> To specify a port, use the `-p` option.
+
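+Here's a hedged sketch that maps host port 5001 to the container's port 5000 with the `-p` option (the billing values are placeholders for your resource):
+
+```bash
+docker run --rm -it -p 5001:5000 \
+  -e eula=accept \
+  -e billing={TRANSLATOR_ENDPOINT_URI} \
+  -e apikey={TRANSLATOR_KEY} \
+  mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```
+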
+### Translate a single input
+
+This example shows how to translate a single sentence from English to Simplified Chinese.
+
+```bash
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+The `translations` array includes one element, which provides the translation of the single piece of text in the input.
+
+### Query Azure AI Translator endpoint (text)
+
+Here's an example cURL HTTP request using localhost:5000 that you specified with the `docker run` command:
+
+```bash
+ curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS" \
+ -H "Content-Type: application/json" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+> [!NOTE]
+> If you attempt the cURL POST request before the container is ready, you'll end up getting a *Service is temporarily unavailable* response. Wait until the container is ready, then try again.
+
+### Translate text using Swagger API
+
+#### English &leftrightarrow; German
+
+1. Navigate to the Swagger page: `http://localhost:5000/swagger/index.html`
+1. Select **POST /translate**
+1. Select **Try it out**
+1. Enter the **From** parameter as `en`
+1. Enter the **To** parameter as `de`
+1. Enter the **api-version** parameter as `3.0`
+1. Under **texts**, replace `string` with the following JSON
+
+```json
+ [
+ {
+ "text": "hello, how are you"
+ }
+ ]
+```
+
+Select **Execute**. The resulting translations are output in the **Response Body**. You should see the following response:
+
+```json
+"translations": [
+ {
+ "text": "hallo, wie geht es dir",
+ "to": "de"
+ }
+ ]
+```
+
+### Translate text with Python
+
+#### English &leftrightarrow; French
+
+```python
+import requests, json
+
+url = 'http://localhost:5000/translate?api-version=3.0&from=en&to=fr'
+headers = { 'Content-Type': 'application/json' }
+body = [{ 'text': 'Hello, how are you' }]
+
+request = requests.post(url, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(
+ response,
+ sort_keys=True,
+ indent=4,
+ ensure_ascii=False,
+ separators=(',', ': ')))
+```
+
+### Translate text with C#/.NET console app
+
+#### English &leftrightarrow; Spanish
+
+Launch Visual Studio and create a new console application. Edit the `*.csproj` file to add the `<LangVersion>7.1</LangVersion>` node, which specifies C# 7.1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package, version 11.0.2.
+
+In `Program.cs`, replace all the existing code with the following script:
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+namespace TranslateContainer
+{
+ class Program
+ {
+ const string ApiHostEndpoint = "http://localhost:5000";
+ const string TranslateApi = "/translate?api-version=3.0&from=en&to=es";
+
+ static async Task Main(string[] args)
+ {
+ var textToTranslate = "Sunny day in Seattle";
+ var result = await TranslateTextAsync(textToTranslate);
+
+ Console.WriteLine(result);
+ Console.ReadLine();
+ }
+
+ static async Task<string> TranslateTextAsync(string textToTranslate)
+ {
+ var body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ var client = new HttpClient();
+ using (var request =
+ new HttpRequestMessage
+ {
+ Method = HttpMethod.Post,
+ RequestUri = new Uri($"{ApiHostEndpoint}{TranslateApi}"),
+ Content = new StringContent(requestBody, Encoding.UTF8, "application/json")
+ })
+ {
+ // Send the request and await a response.
+ var response = await client.SendAsync(request);
+
+ return await response.Content.ReadAsStringAsync();
+ }
+ }
+ }
+}
+```
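+
+With the container listening on `localhost:5000`, running the app (for example, with `dotnet run` or F5 in Visual Studio) should print a JSON array containing the Spanish translation of the sample sentence.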
+
+### Translate multiple strings
+
+Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
+
+```bash
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
+```
+
+The response contains the translation of all pieces of text in the exact same order as in the request.
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ },
+ {
+ "translations":[
+ {"text":"我很好,谢谢你。","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Translate to multiple languages
+
+This example shows how to translate the same input to several languages in one request.
+
+```bash
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
+ {"text":"Hallo, was ist dein Name?","to":"de"}
+ ]
+ }
+]
+```
+
+### Translate content with markup and specify translated content
+
+It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element isn't translated, while the content in the second `div` element is translated.
+
+```html
+<div class="notranslate">This will not be translated.</div>
+<div>This will be translated. </div>
+```
+
+Here's a sample request to illustrate.
+
+```bash
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
+```
+
+The response is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Translate with dynamic dictionary
+
+If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names.
+
+The markup to supply uses the following syntax.
+
+```html
+<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
+```
+
+For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
+
+```bash
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.'}]"
+```
+
+The result is:
+
+```json
+[
+ {
+ "translations":[
+            {"text":"Das Wort \"wordomatic\" ist ein Wörterbucheintrag.","to":"de"}
+ ]
+ }
+]
+```
+
+This feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you created training data that shows your word or phrase in context, you get better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md).
+
+## Request limits
+
+Each translate request is limited to 10,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 x 3 = 9,000 characters, which satisfies the request limit. You're charged per character, not by the number of requests. We recommend sending shorter requests.
+
+The following table lists array element and character limits for the Translator **translation** operation.
+
+| Operation | Maximum size of array element | Maximum number of array elements | Maximum request size (characters) |
+|:-|:-|:-|:-|
+| translate | 10,000 | 100 | 10,000 |
+
+## Use docker compose: Translator with supporting containers
+
+Docker Compose is a tool that enables you to configure multi-container applications using a single YAML file, typically named `compose.yaml`. Use the `docker compose up` command to start your container application and the `docker compose down` command to stop and remove your containers.
+
+If you installed the Docker Desktop CLI, it includes Docker Compose and its prerequisites. If you don't have Docker Desktop, see the [Installing Docker Compose overview](https://docs.docker.com/compose/install/).
+
+The following table lists the required supporting containers for your text and document translation operations. The Translator container sends billing information to Azure via the Azure AI Translator resource on your Azure account.
+
+|Operation|Request query|Document type|Supporting containers|
+|--|--|--|--|
+|&bullet; Text translation<br>&bullet; Document Translation |`from` specified. |Office documents| None|
+|&bullet; Text translation<br>&bullet; Document Translation|`from` not specified. Requires automatic language detection to determine the source language. |Office documents |✔️ [**Text analytics:language**](../../language-service/language-detection/how-to/use-containers.md) container|
+|&bullet; Text translation<br>&bullet; Document Translation |`from` specified. |Scanned PDF documents| ✔️ [**Vision:read**](../../computer-vision/computer-vision-how-to-install-containers.md) container|
+|&bullet; Text translation<br>&bullet; Document Translation|`from` not specified requiring automatic language detection to determine source language.|Scanned PDF documents| ✔️ [**Text analytics:language**](../../language-service/language-detection/how-to/use-containers.md) container<br><br>✔️ [**Vision:read**](../../computer-vision/computer-vision-how-to-install-containers.md) container|
+
+##### Container images and tags
+
+The Azure AI services container images can be found in the [**Microsoft Artifact Registry**](https://mcr.microsoft.com/catalog?page=3) catalog. The following table lists the fully qualified image location for text and document translation:
+
+|Container|Image location|Notes|
+|--|-||
+|Translator: Text translation| `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`| You can view the full list of [Azure AI services Text Translation](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags) version tags on MCR.|
+|Translator: Document translation|**TODO**| **TODO**|
+|Text analytics: language|`mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest` |You can view the full list of [Azure AI services Text Analytics Language](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/tags) version tags on MCR.|
+|Vision: read|`mcr.microsoft.com/azure-cognitive-services/vision/read:latest`|You can view the full list of [Azure AI services Computer Vision Read `OCR`](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/tags) version tags on MCR.|
+
+### Create your application
+
+1. Using your preferred editor or IDE, create a new directory for your app named `container-environment` or a name of your choice.
+1. Create a new YAML file named `compose.yaml`. Either the `.yml` or `.yaml` extension can be used for the `compose` file.
+1. Copy and paste the following YAML code sample into your `compose.yaml` file. Replace `{TRANSLATOR_KEY}` and `{TRANSLATOR_ENDPOINT_URI}` with the key and endpoint values from your Azure portal Translator instance. Make sure you use the `document translation endpoint`.
+1. The top-level name (`azure-ai-translator`, `azure-ai-language`, `azure-ai-read`) is a parameter that you specify.
+1. The `container_name` is an optional parameter that sets a name for the container when it runs, rather than letting `docker compose` generate a name.
+
+ ```yml
+
+ azure-ai-translator:
+ container_name: azure-ai-translator
+    image: mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+ environment:
+ - EULA=accept
+ - billing={TRANSLATOR_ENDPOINT_URI}
+ - apiKey={TRANSLATOR_KEY}
+ - AzureAiLanguageHost=http://azure-ai-language:5000
+ - AzureAiReadHost=http://azure-ai-read:5000
+ ports:
+ - "5000:5000"
+ azure-ai-language:
+ container_name: azure-ai-language
+ image: mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest
+ environment:
+ - EULA=accept
+ - billing={TRANSLATOR_ENDPOINT_URI}
+ - apiKey={TRANSLATOR_KEY}
+ azure-ai-read:
+ container_name: azure-ai-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:latest
+ environment:
+ - EULA=accept
+ - billing={TRANSLATOR_ENDPOINT_URI}
+ - apiKey={TRANSLATOR_KEY}
+ ```
+
+1. Open a terminal, navigate to the `container-environment` folder, and start the containers with the following `docker compose` command:
+
+ ```bash
+ docker compose up
+ ```
+
+1. To stop the containers, use the following command:
+
+ ```bash
+ docker compose down
+ ```
+
+ > [!TIP]
+ > **`docker compose` commands:**
+ >
+ > * `docker compose pause` pauses running containers.
+ > * `docker compose unpause {your-container-name}` unpauses paused containers.
+    > * `docker compose restart` restarts all stopped and running containers with all their previous changes intact. If you make changes to your `compose.yaml` configuration, these changes aren't updated with the `docker compose restart` command. You have to use the `docker compose up` command to reflect updates and changes in the `compose.yaml` file.
+ > * `docker compose ps -a` lists all containers, including those that are stopped.
+ > * `docker compose exec` enables you to execute commands to *detach* or *set environment variables* in a running container.
+ >
+ > For more information, *see* [docker CLI reference](https://docs.docker.com/engine/reference/commandline/docker/).
+
+## Next Steps
+
+> [!div class="nextstepaction"]
+> [Learn more about text translation](../translator-text-apis.md#translate-text)
ai-services Translator Container Supported Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-container-supported-parameters.md
- Title: "Container: Translate method"-
-description: Understand the parameters, headers, and body messages for the container Translate method of Azure AI Translator to translate text.
-#
----- Previously updated : 07/18/2023---
-# Container: Translate
-
-Translate text.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-http://localhost:{port}/translate?api-version=3.0
-```
-
-Example: http://<span></span>localhost:5000/translate?api-version=3.0
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-### Required parameters
-
-| Query parameter | Description |
-| | |
-| api-version | _Required parameter_. <br>Version of the API requested by the client. Value must be `3.0`. |
-| from | _Required parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope.|
-| to | _Required parameter_. <br>Specifies the language of the output text. The target language must be one of the [supported languages](../reference/v3-0-languages.md) included in the `translation` scope. For example, use `to=de` to translate to German. <br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |
-
-### Optional parameters
-
-| Query parameter | Description |
-| | |
-| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
-| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
-
-Request headers include:
-
-| Headers | Description |
-| | |
-| Authentication header(s) | _Required request header_. <br>See [available options for authentication](../reference/v3-0-reference.md#authentication). |
-| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |
-| Content-Length | _Required request header_. <br>The length of the request body. |
-| X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
-
-```json
-[
- {"Text":"I would really like to drive your car around the block a few times."}
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 100 elements.
-* The entire text included in the request can't exceed 10,000 characters including spaces.
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
-* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
-
-* `to`: A string representing the language code of the target language.
-
-* `text`: A string giving the translated text.
-
-* `sentLen`: An object returning sentence boundaries in the input and output texts.
-
-* `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
-* `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
- Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
-
- * `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arab script.
-
-Examples of JSON responses are provided in the [examples](#examples) section.
-
-## Response headers
-
-| Headers | Description |
-| | |
-| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
-| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: </br></br>&FilledVerySmallSquare; Custom - Request includes a custom system and at least one custom system was used during translation.</br>&FilledVerySmallSquare; Team - All other requests |
-
-## Response status codes
-
-If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](../reference/v3-0-reference.md#errors).
-
-## Examples
-
-### Translate a single input
-
-This example shows how to translate a single sentence from English to Simplified Chinese.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- }
-]
-```
-
-The `translations` array includes one element, which provides the translation of the single piece of text in the input.
-
-### Translate multiple pieces of text
-
-Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
-```
-
-The response contains the translation of all pieces of text in the exact same order as in the request.
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- },
- {
- "translations":[
- {"text":"我很好,谢谢你。","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Translate to multiple languages
-
-This example shows how to translate the same input to several languages in one request.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
- {"text":"Hallo, was ist dein Name?","to":"de"}
- ]
- }
-]
-```
-
-### Translate content with markup and decide what's translated
-
-It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
-
-```
-<div class="notranslate">This will not be translated.</div>
-<div>This will be translated. </div>
-```
-
-Here's a sample request to illustrate.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
-```
-
-The response is:
-
-```
-[
- {
- "translations":[
- {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Translate with dynamic dictionary
-
-If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names.
-
-The markup to supply uses the following syntax.
-
-```
-<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
-```
-
-For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
-
-```
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.'}]"
-```
-
-The result is:
-
-```
-[
- {
- "translations":[
- {"text":"Das Wort \"wordomatic\" ist ein W├╢rterbucheintrag.","to":"de"}
- ]
- }
-]
-```
-
-This feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you've created training data that shows your work or phrase in context, you'll get much better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md).
-
-## Request limits
-
-Each translate request is limited to 10,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfy the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests.
-
-The following table lists array element and character limits for the Translator **translation** operation.
-
-| Operation | Maximum size of array element | Maximum number of array elements | Maximum request size (characters) |
-|:-|:-|:-|:-|
-| translate | 10,000 | 100 | 10,000 |
ai-services Translator Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-disconnected-containers.md
- Title: Use Translator Docker containers in disconnected environments-
-description: Learn how to run Azure AI Translator containers in disconnected environments.
-#
---- Previously updated : 07/28/2023---
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# Use Translator containers in disconnected environments
-
- Azure AI Translator containers allow you to use Translator Service APIs with the benefits of containerization. Disconnected containers are offered through commitment tier pricing offered at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Translator Service features for a fixed fee, at a predictable total cost, based on the needs of your workload.
-
-## Get started
-
-Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container:
-
-* Host computer requirements and recommendations.
-* The Docker `pull` command to download the container.
-* How to validate that a container is running.
-* How to send queries to the container's endpoint, once it's running.
-
-## Request access to use containers in disconnected environments
-
-Complete and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to the containers disconnected from the Internet.
--
-Access is limited to customers that meet the following requirements:
-
-* Your organization should be identified as strategic customer or partner with Microsoft.
-* Disconnected containers are expected to run fully offline, hence your use cases must meet at least one of these or similar requirements:
- * Environment or device(s) with zero connectivity to internet.
- * Remote location that occasionally has internet access.
- * Organization under strict regulation of not sending any kind of data back to cloud.
-* Application completed as instructed. Make certain to pay close attention to guidance provided throughout the application to ensure you provide all the necessary information required for approval.
-
-## Create a new resource and purchase a commitment plan
-
-1. Create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
-
-1. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
-
- > [!NOTE]
- >
- > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
-
- :::image type="content" source="../media/create-resource-offline-container.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
-
-1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
-
-## Gather required parameters
-
-There are three required parameters for all Azure AI services' containers:
-
-* The end-user license agreement (EULA) must be present with a value of *accept*.
-* The endpoint URL for your resource from the Azure portal.
-* The API key for your resource from the Azure portal.
-
-Both the endpoint URL and API key are needed when you first run the container to configure it for disconnected usage. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
-
- :::image type="content" source="../media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot of Azure portal keys and endpoint page.":::
-
-> [!IMPORTANT]
-> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
-
-## Download a Docker container with `docker pull`
-
-Download the Docker container that has been approved to run in a disconnected environment. For example:
-
-|Docker pull command | Value |Format|
-|-|-||
-|&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation: latest |
-|||
-|&bullet; **`docker pull [image]:[version]`** | A specific container image |mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64 |
-
- **Example Docker pull command**
-
-```docker
-docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
-```
-
-## Configure the container to run in a disconnected environment
-
-Now that you've downloaded your container, you need to execute the `docker run` command with the following parameters:
-
-* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file in corresponding approved container.
-* **`Languages={language list}`**. You must include this parameter to download model files for the [languages](../language-support.md) you want to translate.
-
-> [!IMPORTANT]
-> The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
-
-The following example shows the formatting for the `docker run` command with placeholder values. Replace these placeholder values with your own values.
-
-| Placeholder | Value | Format|
-|-|-||
-| `[image]` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
-| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
- | `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Text Translation resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`|
-| `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-
- **Example `docker run` command**
-
-```docker
-
-docker run --rm -it -p 5000:5000 \
---v {MODEL_MOUNT_PATH} \---v {LICENSE_MOUNT_PATH} \---e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \---e DownloadLicense=true \---e eula=accept \---e billing={ENDPOINT_URI} \---e apikey={API_KEY} \---e Languages={LANGUAGES_LIST} \-
-[image]
-```
-
-### Translator translation models and container configuration
-
-After you've [configured the container](#configure-the-container-to-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output:
-
-```bash
- -e MODELS= usr/local/models/model1/, usr/local/models/model2/
- -e TRANSLATORSYSTEMCONFIG=/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832
-```
-
-## Run the container in a disconnected environment
-
-Once the license file has been downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholders values with your own values.
-
-Whenever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
-
-Placeholder | Value | Format|
-|-|-||
-| `[image]`| The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
- `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `16g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/translator/license:/path/to/license/directory` |
-|`{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
-|`{MODELS_DIRECTORY_LIST}`|List of comma separated directories each having a machine translation model. | `/usr/local/models/enu_esn_generalnn_2022240501,/usr/local/models/esn_enu_generalnn_2022240501` |
-| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
-|`{TRANSLATOR_CONFIG_JSON}`| Translator system configuration file used by container internally.| `/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832` |
-
- **Example `docker run` command**
-
-```docker
-
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
---v {MODEL_MOUNT_PATH} \---v {LICENSE_MOUNT_PATH} \---v {OUTPUT_MOUNT_PATH} \---e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \---e Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} \---e MODELS={MODELS_DIRECTORY_LIST} \---e TRANSLATORSYSTEMCONFIG={TRANSLATOR_CONFIG_JSON} \---e eula=accept \-
-[image]
-```
-
-## Other parameters and commands
-
-Here are a few more parameters and commands you may need to run the container:
-
-#### Usage records
-
-When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
-
-#### Arguments for storing logs
-
-When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
-
- **Example `docker run` command**
-
-```docker
-docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
-```
-#### Environment variable names in Kubernetes deployments
-
-Some Azure AI Containers, for example Translator, require users to pass environmental variable names that include colons (`:`) when running the container. This will work fine when using Docker, but Kubernetes does not accept colons in environmental variable names.
-To resolve this, you can replace colons with two underscore characters (`__`) when deploying to Kubernetes. See the following example of an acceptable format for environmental variable names:
-
-```Kubernetes
- env:
- - name: Mounts__License
- value: "/license"
- - name: Mounts__Output
- value: "/output"
-```
-
-This example replaces the default format for the `Mounts:License` and `Mounts:Output` environment variable names in the docker run command.
-
-#### Get records using the container endpoints
-
-The container provides two endpoints for returning records regarding its usage.
-
-#### Get all records
-
-The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
-
-```HTTP
-https://<service>/records/usage-logs/
-```
-
- **Example HTTPS endpoint**
-
- `http://localhost:5000/records/usage-logs`
-
-The usage-logs endpoint returns a JSON response similar to the following example:
-
-```json
-{
-"apiType": "string",
-"serviceName": "string",
-"meters": [
-{
- "name": "string",
- "quantity": 256345435
- }
- ]
-}
-```
-
-#### Get records for a specific month
-
-The following endpoint provides a report summarizing usage over a specific month and year:
-
-```HTTP
-https://<service>/records/usage-logs/{MONTH}/{YEAR}
-```
-
-This usage-logs endpoint returns a JSON response similar to the following example:
-
-```json
-{
- "apiType": "string",
- "serviceName": "string",
- "meters": [
- {
- "name": "string",
- "quantity": 56097
- }
- ]
-}
-```
-
-### Purchase a different commitment plan for disconnected containers
-
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase more unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
-
-You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section.
-
-### End a commitment plan
-
- If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're still able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you avoid charges for the following year.
-
-## Troubleshooting
-
-Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
-
-> [!TIP]
-> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../containers/disconnected-container-faq.yml).
-
-That's it! You've learned how to create and run disconnected containers for Azure AI Translator Service.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Request parameters for Translator text containers](translator-container-supported-parameters.md)
ai-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md
- Title: Install and run Docker containers for Translator API-
-description: Use the Docker container for Translator API to translate text.
-#
---- Previously updated : 07/18/2023-
-recommendations: false
-keywords: on-premises, Docker, container, identify
--
-# Install and run Translator containers
-
-Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article, you learn how to download, install, and run a Translator container.
-
-The Translator container enables you to build a translator application architecture that's optimized for both robust cloud capabilities and edge locality.
-
-See the list of [languages supported](../language-support.md) when using Translator containers.
-
-> [!IMPORTANT]
->
-> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
-> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
-
-<!-- markdownlint-disable MD033 -->
-
-## Prerequisites
-
-To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-You also need:
-
-| Required | Purpose |
-|--|--|
-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
-| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
-
-|Optional|Purpose|
-||-|
-|Azure CLI (command-line interface) |<ul><li> The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell.</li></ul> |
-
-## Required elements
-
-All Azure AI containers require three primary elements:
-
-* **EULA accept setting**. An end-user license agreement (EULA) set with a value of `Eula=accept`.
-
-* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
-
-> [!IMPORTANT]
->
-> * Keys are used to access your Azure AI resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
-
-## Host computer
--
-## Container requirements and recommendations
-
-The following table describes the minimum and recommended CPU cores and memory to allocate for the Translator container.
-
-| Container | Minimum |Recommended | Language Pair |
-|--|||-|
-| Translator |`2` cores, `4 GB` memory |`4` cores, `8 GB` memory | 2 |
-
-* Each core must be at least 2.6 gigahertz (GHz).
-
-* The core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-> [!NOTE]
->
-> * CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the docker run command.
->
-> * The minimum and recommended specifications are based on Docker limits, not host machine resources.
-
-## Request approval to run container
-
-Complete and submit the [**Azure AI services
-Application for Gated Services**](https://aka.ms/csgate-translator) to request access to the container.
---
-## Translator container image
-
-The Translator container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`.
-
-To use the latest version of the container, you can use the `latest` tag. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags).
-
-## Get container images with **docker commands**
-
-> [!IMPORTANT]
->
-> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-> * The `EULA`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start.
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to download a container image from the Microsoft Container Registry and run it.
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
--v /mnt/d/TranslatorContainer:/usr/local/models \
--e apikey={API_KEY} \
--e eula=accept \
--e billing={ENDPOINT_URI} \
--e Languages=en,fr,es,ar,ru \
-mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
-```
-
-The above command:
-
-* Downloads and runs a Translator container from the container image.
-* Allocates 12 gigabytes (GB) of memory and four CPU cores.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Accepts the end-user license agreement (EULA).
-* Configures the billing endpoint.
-* Downloads translation models for English, French, Spanish, Arabic, and Russian.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
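If you'd rather start the container from Python than from the command line, the Docker SDK for Python (`pip install docker`) can issue a roughly equivalent call. The following is a sketch, not the documented procedure; the mount path, key, and endpoint values are placeholders:

```python
import docker

client = docker.from_env()

container = client.containers.run(
    "mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest",
    detach=True,                 # run in the background instead of -it
    auto_remove=True,            # same effect as --rm
    ports={"5000/tcp": 5000},
    mem_limit="12g",
    nano_cpus=4_000_000_000,     # 4 CPU cores
    volumes={"/mnt/d/TranslatorContainer": {"bind": "/usr/local/models", "mode": "rw"}},
    environment={
        "apikey": "{API_KEY}",
        "eula": "accept",
        "billing": "{ENDPOINT_URI}",
        "Languages": "en,fr,es,ar,ru",
    },
)
print(container.short_id)
```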
-
-### Run multiple containers on the same host
-
-If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-
-You can run this container and a different Azure AI container on the host together. You can also run multiple instances of the same Azure AI container.
-
-## Query the container's Translator endpoint
-
- The container provides a REST-based Translator endpoint API. Here's an example request:
-
-```curl
-curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS"
- -H "Content-Type: application/json" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-> [!NOTE]
-> If you attempt the cURL POST request before the container is ready, you'll end up getting a *Service is temporarily unavailable* response. Wait until the container is ready, then try again.
-
-## Stop the container
--
-## Troubleshoot
-
-### Validate that a container is running
-
-There are several ways to validate that the container is running:
-
-* The container provides a homepage at `/` as a visual validation that the container is running.
-
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
-
-| Request URL | Purpose |
-|--|--|
-| `http://localhost:5000/` | The container provides a home page. |
-| `http://localhost:5000/ready` | Requested with GET. Provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/status` | Requested with GET. Verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
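For scripted validation, a small Python sketch like the following can poll `/ready` and then check `/status`; the one-minute timeout and the `localhost:5000` address are arbitrary choices for this example:

```python
import time
import requests

BASE_URL = "http://localhost:5000"  # your container's external IP address and exposed port

def wait_until_ready(timeout_seconds: int = 60) -> bool:
    """Poll /ready until the container accepts queries or the timeout elapses."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            if requests.get(f"{BASE_URL}/ready", timeout=5).ok:
                return True
        except requests.ConnectionError:
            pass  # the container is still starting
        time.sleep(2)
    return False

if wait_until_ready():
    # /status verifies the api-key used to start the container without querying the model.
    status = requests.get(f"{BASE_URL}/status", timeout=5)
    print("Status check:", status.status_code)
else:
    print("Container did not become ready in time.")
```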
---
-## Text translation code samples
-
-### Translate text with swagger
-
-#### English &leftrightarrow; German
-
-Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
-
-1. Select **POST /translate**
-1. Select **Try it out**
-1. Enter the **From** parameter as `en`
-1. Enter the **To** parameter as `de`
-1. Enter the **api-version** parameter as `3.0`
-1. Under **texts**, replace `string` with the following JSON
-
-```json
- [
- {
- "text": "hello, how are you"
- }
- ]
-```
-
-Select **Execute**. The resulting translations are output in the **Response Body**. You should expect something similar to the following response:
-
-```json
-"translations": [
- {
- "text": "hallo, wie geht es dir",
- "to": "de"
- }
- ]
-```
-
-### Translate text with Python
-
-```python
-import requests, json
-
-url = 'http://localhost:5000/translate?api-version=3.0&from=en&to=fr'
-headers = { 'Content-Type': 'application/json' }
-body = [{ 'text': 'Hello, how are you' }]
-
-request = requests.post(url, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(
- response,
- sort_keys=True,
- indent=4,
- ensure_ascii=False,
- separators=(',', ': ')))
-```
-
-### Translate text with C#/.NET console app
-
-Launch Visual Studio and create a new console application. Edit the `*.csproj` file to add the `<LangVersion>7.1</LangVersion>` node, which specifies C# 7.1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package, version 11.0.2.
-
-In `Program.cs`, replace all the existing code with the following script:
-
-```csharp
-using Newtonsoft.Json;
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-namespace TranslateContainer
-{
- class Program
- {
- const string ApiHostEndpoint = "http://localhost:5000";
- const string TranslateApi = "/translate?api-version=3.0&from=en&to=de";
-
- static async Task Main(string[] args)
- {
- var textToTranslate = "Sunny day in Seattle";
- var result = await TranslateTextAsync(textToTranslate);
-
- Console.WriteLine(result);
- Console.ReadLine();
- }
-
- static async Task<string> TranslateTextAsync(string textToTranslate)
- {
- var body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- var client = new HttpClient();
- using (var request =
- new HttpRequestMessage
- {
- Method = HttpMethod.Post,
- RequestUri = new Uri($"{ApiHostEndpoint}{TranslateApi}"),
- Content = new StringContent(requestBody, Encoding.UTF8, "application/json")
- })
- {
- // Send the request and await a response.
- var response = await client.SendAsync(request);
-
- return await response.Content.ReadAsStringAsync();
- }
- }
- }
-}
-```
-
-## Summary
-
-In this article, you learned concepts and workflows for downloading, installing, and running the Translator container. Now you know:
-
-* Translator provides Linux containers for Docker.
-* Container images are downloaded from the container registry and run in Docker.
-* You can use the REST API to call the `translate` operation in the Translator container by specifying the container's host URI.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Azure AI containers](../../cognitive-services-container-support.md?context=%2fazure%2fcognitive-services%2ftranslator%2fcontext%2fcontext)
ai-services Transliterate Text Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/transliterate-text-parameters.md
+
+ Title: "Container: Transliterate document method"
+
+description: Understand the parameters, headers, and body messages for the Azure AI Translator container transliterate text operation.
+#
+++++ Last updated : 04/08/2024+++
+# Container: Transliterate Text
+
+Convert characters or letters of a source language to the corresponding characters or letters of a target language.
+
+## Request URL
+
+`POST` request:
+
+```HTTP
+ POST {Endpoint}/transliterate?api-version=3.0&language={language}&fromScript={fromScript}&toScript={toScript}
+
+```
+
+*See* [**Virtual Network Support**](../reference/v3-0-reference.md#virtual-network-support) for Translator service selected network and private endpoint configuration and support.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query parameter | Description |Condition|
+| | | |
+| api-version |Version of the API requested by the client. Value must be `3.0`. |*Required parameter*|
+| language |Specifies the source language of the text to convert from one script to another.| *Required parameter*|
+| fromScript | Specifies the script used by the input text. |*Required parameter*|
+| toScript |Specifies the output script.|*Required parameter*|
+
+* You can query the service for `transliteration` scope [supported languages](../reference/v3-0-languages.md).
+* *See also* [Language support for transliteration](../language-support.md#transliteration).
+
+## Request headers
+
+| Headers | Description |Condition|
+| | | |
+| Authentication headers | *See* [available options for authentication](../reference/v3-0-reference.md#authentication)|*Required request header*|
+| Content-Type | Specifies the content type of the payload. Possible value: `application/json` |*Required request header*|
+| Content-Length |The length of the request body. |*Optional*|
+| X-ClientTraceId |A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |*Optional*|
+
+## Response body
+
+A successful response is a JSON array with one result for each element in the input array. A result object includes the following properties:
+
+* `text`: A string that results from converting the input string to the output script.
+
+* `script`: A string specifying the script used in the output.
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It can be used for troubleshooting purposes. |
+
+### Sample request
+
+```http
+https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn
+```
+
+### Sample request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to convert.
+
+```json
+[
+    {"Text":"こんにちは"},
+ {"Text":"さようなら"}
+]
+```
+
+The following limitations apply:
+
+* The array can have a maximum of 10 elements.
+* The text value of an array element can't exceed 1,000 characters including spaces.
+* The entire text included in the request can't exceed 5,000 characters, including spaces.
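A client-side pre-flight check of these limits might look like the following Python sketch:

```python
from typing import Dict, List

def validate_transliterate_body(items: List[Dict[str, str]]) -> None:
    """Raise ValueError if the request body breaks the documented limits."""
    if len(items) > 10:
        raise ValueError("The array can have a maximum of 10 elements.")
    total_length = 0
    for item in items:
        text = item["Text"]
        if len(text) > 1000:
            raise ValueError("An array element can't exceed 1,000 characters, including spaces.")
        total_length += len(text)
    if total_length > 5000:
        raise ValueError("The entire request can't exceed 5,000 characters, including spaces.")

validate_transliterate_body([{"Text": "こんにちは"}, {"Text": "さようなら"}])
```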
+
+### Sample JSON response:
+
+```json
+[
+ {
+    "text": "Kon'nichiwa",
+ "script": "Latn"
+ },
+ {
+ "text": "sayonara",
+ "script": "Latn"
+ }
+]
+```
+
+## Code samples: transliterate text
+
+> [!NOTE]
+>
+> * Each sample runs on the `localhost` that you specified with the `docker run` command.
+> * While your container is running, `localhost` points to the container itself.
+> * You don't have to use `localhost:5000`. You can use any port that is not already in use in your host environment.
+> To specify a port, use the `-p` option.
+
+### Transliterate with REST API
+
+```rest
+
+ POST https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn HTTP/1.1
+ Ocp-Apim-Subscription-Key: ba6c4278a6c0412da1d8015ef9930d44
+ Content-Type: application/json
+
+ [
+    {"Text":"こんにちは"},
+ {"Text":"さようなら"}
+ ]
+```
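For parity with the Python text translation example earlier in this document, here's a sketch of the same transliterate request sent to a locally running container; the `localhost:5000` address assumes the port mapping from your `docker run` command:

```python
import json
import requests

url = "http://localhost:5000/transliterate"
params = {"api-version": "3.0", "language": "ja", "fromScript": "Jpan", "toScript": "Latn"}
headers = {"Content-Type": "application/json"}
body = [{"Text": "こんにちは"}, {"Text": "さようなら"}]

response = requests.post(url, params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=4, ensure_ascii=False))
```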
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about text transliteration](../translator-text-apis.md#transliterate-text)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/faq.md
Title: Frequently asked questions - Document Translation
-description: Get answers to frequently asked questions about Document Translation.
+description: Get answers to Document Translation frequently asked questions.
# Previously updated : 11/30/2023 Last updated : 03/11/2024
If the language of the content in the source document is known, we recommend tha
#### To what extent are the layout, structure, and formatting maintained?
-When text is translated from the source to target language, the overall length of translated text can differ from source. The result could be reflow of text across pages. The same fonts aren't always available in both source and target language. In general, the same font style is applied in target language to retain formatting closer to source.
+When text is translated from the source to target language, the overall length of translated text can differ from source. The result could be reflow of text across pages. The same fonts aren't always available in both source and target language. In general, the same font style is applied in target language to retain formatting closer to source.
#### Will the text in an image within a document gets translated?
-No. The text in an image within a document isn't translated.
+&#8203;No. The text in an image within a document isn't translated.
#### Can Document Translation translate content from scanned documents?
Yes. Document Translation translates content from _scanned PDF_ documents.
#### Can encrypted or password-protected documents be translated?
-No. The service can't translate encrypted or password-protected documents. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
+&#8203;No. The service can't translate encrypted or password-protected documents. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
#### If I'm using managed identities, do I also need a SAS token URL?
-No. Don't include SAS token-appended URLS. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
+&#8203;No. Don't include SAS token-appended URLs. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
#### Which PDF format renders the best results?
ai-studio Develop In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop-in-vscode.md
Last updated 1/10/2024 --++ # Get started with Azure AI projects in VS Code
ai-studio Generate Data Qa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/generate-data-qa.md
In this article, you learn how to get question and answer pairs from your source
## Install the Synthetics Package

```shell
-python --version # ensure you've >=3.8
+python --version # use version 3.8 or later
pip3 install azure-identity azure-ai-generative
pip3 install wikipedia langchain nltk unstructured
```
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
- ignite-2023 Previously updated : 2/24/2024 Last updated : 4/5/2024
You must have:
- An Azure AI project - An Azure AI Search resource
-## Create an index
+## Create an index from the Indexes tab
1. Sign in to [Azure AI Studio](https://ai.azure.com).
1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
You must have:
:::image type="content" source="../media/index-retrieve/project-left-menu.png" alt-text="Screenshot of Project Left Menu." lightbox="../media/index-retrieve/project-left-menu.png":::

1. Select **+ New index**
-1. Choose your **Source data**. You can choose source data from a list of your recent data sources, a storage URL on the cloud or even upload files and folders from the local machine. You can also add a connection to another data source such as Azure Blob Storage.
+1. Choose your **Source data**. You can choose source data from a list of your recent data sources, a storage URL on the cloud, or upload files and folders from the local machine. You can also add a connection to another data source such as Azure Blob Storage.
:::image type="content" source="../media/index-retrieve/select-source-data.png" alt-text="Screenshot of select source data." lightbox="../media/index-retrieve/select-source-data.png":::
You must have:
1. Select **Next** after choosing index storage
1. Configure your **Search Settings**
- 1. The search type defaults to **Hybrid + Semantic**, which is a combination of keyword search, vector search and semantic search to give the best possible search results.
- 1. For the hybrid option to work, you need an embedding model. Choose the Azure OpenAI resource, which has the embedding model
+    1. The ***Vector settings*** option defaults to true for **Add vector search to this search resource**. As noted, this setting enables the Hybrid and Hybrid + Semantic search options. Disabling it limits the search options to Keyword and Semantic.
+ 1. For the hybrid option to work, you need an embedding model. Choose an embedding model from the dropdown.
1. Select the acknowledgment to deploy an embedding model if it doesn't already exist in your resource
-
+ :::image type="content" source="../media/index-retrieve/search-settings.png" alt-text="Screenshot of configure search settings." lightbox="../media/index-retrieve/search-settings.png":::
+
+    If a non-Azure OpenAI model isn't appearing in the dropdown, follow these steps:
+    1. Navigate to the Project settings in [Azure AI Studio](https://ai.azure.com).
+    1. Navigate to the connections section in the settings tab and select New connection.
+    1. Select **Serverless Model**.
+    1. Type in the name of your embedding model deployment and select Add connection. If the model doesn't appear in the dropdown, select the **Enter manually** option.
+    1. Enter the deployment API endpoint, model name, and API key in the corresponding fields. Then select Add connection.
+ 1. The embedding model should now appear in the dropdown.
+
+ :::image type="content" source="../media/index-retrieve/serverless-connection.png" alt-text="Screenshot of connect a serverless model." lightbox="../media/index-retrieve/serverless-connection.png":::
-1. Use the prefilled name or type your own name for New Vector index name
1. Select **Next** after configuring search settings
1. In the **Index settings**
    1. Enter a name for your index or use the autopopulated name
+ 1. Schedule updates. You can choose to update the index hourly or daily.
1. Choose the compute where you want to run the jobs to create the index. You can
   - Auto select to allow Azure AI to choose an appropriate VM size that is available
   - Choose a VM size from a list of recommended options
You must have:
1. Select **Next** after configuring index settings
1. Review the details you entered and select **Create**
-
- > [!NOTE]
- > If you see a **DeploymentNotFound** error, you need to assign more permissions. See [mitigate DeploymentNotFound error](#mitigate-deploymentnotfound-error) for more details.
- 1. You're taken to the index details page where you can see the status of your index creation.
+## Create an index from the Playground
+1. Open your AI Studio project.
+1. Navigate to the Playground tab.
+1. The **Select available project index** option is displayed for existing indexes in the project. If you're not using an existing index, continue to the next steps.
+1. Select the Add your data dropdown.
+
+ :::image type="content" source="../media/index-retrieve/add-data-dropdown.png" alt-text="Screenshot of the playground add your data dropdown." lightbox="../media/index-retrieve/add-data-dropdown.png":::
-### Mitigate DeploymentNotFound error
-
-When you try to create a vector index, you might see the following error at the **Review + Finish** step:
-
-**Failed to create vector index. DeploymentNotFound: A valid deployment for the model=text-embedding-ada-002 was not found in the workspace connection=Default_AzureOpenAI provided.**
-
-This can happen if you are trying to create an index using an **Owner**, **Contributor**, or **Azure AI Developer** role at the project level. To mitigate this error, you might need to assign more permissions using either of the following methods.
-
-> [!NOTE]
-> You need to be assigned the **Owner** role of the resource group or higher scope (like Subscription) to perform the operation in the next steps. This is because only the Owner role can assign roles to others. See details [here](/azure/role-based-access-control/built-in-roles).
-
-#### Method 1: Assign more permissions to the user on the Azure AI hub resource
-
-If the Azure AI hub resource the project uses was created through Azure AI Studio:
-1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**.
-1. Select **AI project settings** from the collapsible left menu.
-1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal.
-1. In the Azure portal under **Overview** > **Resources** select the Azure AI service type. It's named similar to "YourAzureAIResourceName-aiservices."
-
- :::image type="content" source="../media/roles-access/resource-group-azure-ai-service.png" alt-text="Screenshot of Azure AI service in a resource group." lightbox="../media/roles-access/resource-group-azure-ai-service.png":::
-
-1. Select **Access control (IAM)** > **+ Add** to add a role assignment.
-1. Add the **Cognitive Services OpenAI User** role to the user who wants to make an index. `Cognitive Services OpenAI Contributor` and `Cognitive Services Contributor` also work, but they assign more permissions than needed for creating an index in Azure AI Studio.
-
-> [!NOTE]
-> You can also opt to assign more permissions [on the resource group](#method-2-assign-more-permissions-on-the-resource-group). However, that method assigns more permissions than needed to mitigate the **DeploymentNotFound** error.
-
-#### Method 2: Assign more permissions on the resource group
+1. To create a new index, select the ***Add your data*** option. Then follow the steps in ***Create an index from the Indexes tab*** to navigate through the wizard and create the index.
+    1. To use an existing external index, select the ***Connect external index*** option.
+ 1. In the **Index Source**
+ 1. Select your data source
+ 1. Select your AI Search Service
+ 1. Select the index to be used.
-If the Azure AI hub resource the project uses was created through Azure portal:
-1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**.
-1. Select **AI project settings** from the collapsible left menu.
-1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal.
-1. Select **Access control (IAM)** > **+ Add** to add a role assignment.
-1. Add the **Cognitive Services OpenAI User** role to the user who wants to make an index. `Cognitive Services OpenAI Contributor` and `Cognitive Services Contributor` also work, but they assign more permissions than needed for creating an index in Azure AI Studio.
+ :::image type="content" source="../media/index-retrieve/connect-external-index.png" alt-text="Screenshot of the page where you select an index." lightbox="../media/index-retrieve/connect-external-index.png":::
+
+ 1. Select **Next** after configuring search settings.
+ 1. In the **Index settings**
+ 1. Enter a name for your index or use the autopopulated name
+ 1. Schedule updates. You can choose to update the index hourly or daily.
+ 1. Choose the compute where you want to run the jobs to create the index. You can
+ - Auto select to allow Azure AI to choose an appropriate VM size that is available
+ - Choose a VM size from a list of recommended options
+ - Choose a VM size from a list of all possible options
+ 1. Review the details you entered and select **Create.**
+ 1. The index is now ready to be used in the Playground.
## Use an index in prompt flow
If the Azure AI hub resource the project uses was created through Azure portal:
1. Provide a name for your Index Lookup Tool and select **Add**.
1. Select the **mlindex_content** value box, and select your index. After completing this step, enter the queries and **query_types** to be performed against the index.
- :::image type="content" source="../media/index-retrieve/configure-index-lookup-tool.png" alt-text="Screenshot of Configure Index Lookup." lightbox="../media/index-retrieve/configure-index-lookup-tool.png":::
+ :::image type="content" source="../media/index-retrieve/configure-index-lookup-tool.png" alt-text="Screenshot of the prompt flow node to configure index lookup." lightbox="../media/index-retrieve/configure-index-lookup-tool.png":::
+ ## Next steps -- [Learn more about RAG](../concepts/retrieval-augmented-generation.md)
+- [Learn more about RAG](../concepts/retrieval-augmented-generation.md)
ai-studio Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
Last updated 2/26/2024
- # Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio
The prompt flow *Azure OpenAI GPT-4 Turbo with Vision* tool enables you to use y
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
-- An [Azure AI hub resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
+- An [Azure AI hub resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in [one of the regions that support GPT-4 Turbo with Vision](../../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
## Build with the Azure OpenAI GPT-4 Turbo with Vision tool
ai-studio Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
Title: LLM tool for flows in Azure AI Studio
-description: This article introduces the LLM tool for flows in Azure AI Studio.
+description: This article introduces you to the large language model (LLM) tool for flows in Azure AI Studio.
[!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
-The prompt flow *LLM* tool enables you to use large language models (LLM) for natural language processing.
+To use large language models (LLMs) for natural language processing, you use the prompt flow LLM tool.
> [!NOTE] > For embeddings to convert text into dense vector representations for various natural language processing tasks, see [Embedding tool](embedding-tool.md). ## Prerequisites
-Prepare a prompt as described in the [prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
## Build with the LLM tool 1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md). 1. Select **+ LLM** to add the LLM tool to your flow.
- :::image type="content" source="../../media/prompt-flow/llm-tool.png" alt-text="Screenshot of the LLM tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/llm-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/llm-tool.png" alt-text="Screenshot that shows the LLM tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/llm-tool.png":::
1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
-1. From the **Api** drop-down list, select *chat* or *completion*.
-1. Enter values for the LLM tool input parameters described [here](#inputs). If you selected the *chat* API, see [chat inputs](#chat-inputs). If you selected the *completion* API, see [text completion inputs](#text-completion-inputs). For information about how to prepare the prompt input, see [prerequisites](#prerequisites).
-1. Add more tools to your flow as needed, or select **Run** to run the flow.
-1. The outputs are described [here](#outputs).
-
+1. From the **Api** dropdown list, select **chat** or **completion**.
+1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
+1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
+1. The outputs are described in the [Outputs table](#outputs).
## Inputs
-The following are available input parameters:
+The following input parameters are available.
### Text completion inputs

| Name | Type | Description | Required |
||-|--|-|
-| prompt | string | text prompt for the language model | Yes |
-| model, deployment_name | string | the language model to use | Yes |
-| max\_tokens | integer | the maximum number of tokens to generate in the completion. Default is 16. | No |
-| temperature | float | the randomness of the generated text. Default is 1. | No |
-| stop | list | the stopping sequence for the generated text. Default is null. | No |
-| suffix | string | text appended to the end of the completion | No |
-| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
-| logprobs | integer | the number of log probabilities to generate. Default is null. | No |
-| echo | boolean | value that indicates whether to echo back the prompt in the response. Default is false. | No |
-| presence\_penalty | float | value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
-| frequency\_penalty | float | value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
-| best\_of | integer | the number of best completions to generate. Default is 1. | No |
-| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
-
+| prompt | string | Text prompt for the language model. | Yes |
+| model, deployment_name | string | The language model to use. | Yes |
+| max\_tokens | integer | The maximum number of tokens to generate in the completion. Default is 16. | No |
+| temperature | float | The randomness of the generated text. Default is 1. | No |
+| stop | list | The stopping sequence for the generated text. Default is null. | No |
+| suffix | string | The text appended to the end of the completion. | No |
+| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
+| logprobs | integer | The number of log probabilities to generate. Default is null. | No |
+| echo | boolean | The value that indicates whether to echo back the prompt in the response. Default is false. | No |
+| presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
+| frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
+| best\_of | integer | The number of best completions to generate. Default is 1. | No |
+| logit\_bias | dictionary | The logit bias for the language model. Default is empty dictionary. | No |
### Chat inputs

| Name | Type | Description | Required |
||-||-|
-| prompt | string | text prompt that the language model should reply to | Yes |
-| model, deployment_name | string | the language model to use | Yes |
-| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is inf. | No |
-| temperature | float | the randomness of the generated text. Default is 1. | No |
-| stop | list | the stopping sequence for the generated text. Default is null. | No |
-| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
-| presence\_penalty | float | value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
-| frequency\_penalty | float | value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
-| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
+| prompt | string | The text prompt that the language model should reply to. | Yes |
+| model, deployment_name | string | The language model to use. | Yes |
+| max\_tokens | integer | The maximum number of tokens to generate in the response. Default is inf. | No |
+| temperature | float | The randomness of the generated text. Default is 1. | No |
+| stop | list | The stopping sequence for the generated text. Default is null. | No |
+| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
+| presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
+| frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
+| logit\_bias | dictionary | The logit bias for the language model. Default is empty dictionary. | No |
## Outputs The output varies depending on the API you selected for inputs.
-| API | Return Type | Description |
+| API | Return type | Description |
||-||
-| Completion | string | The text of one predicted completion |
-| Chat | string | The text of one response of conversation |
+| Completion | string | The text of one predicted completion. |
+| Chat | string | The text of one response of conversation. |
## Next steps
ai-studio Prompt Flow Tools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
description: Learn about prompt flow tools that are available in Azure AI Studio
Previously updated : 2/6/2024 Last updated : 4/5/2024
[!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
-The following table provides an index of tools in prompt flow.
+The following table provides an index of tools in prompt flow.
-| Tool (set) name | Description | Environment | Package name |
+| Tool name | Description | Package name |
||--|-|--|
-| [LLM](./llm-tool.md) | Use Azure OpenAI large language models (LLM) for tasks such as text completion or chat. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Python](./python-tool.md) | Run Python code. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Content Safety (Text)](./content-safety-tool.md) | Use Azure AI Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Index Lookup*](./index-lookup-tool.md) | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector Index Lookup*](./vector-index-lookup-tool.md) | Search text or a vector-based query from a vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Faiss Index Lookup*](./faiss-index-lookup-tool.md) | Search a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector DB Lookup*](./vector-db-lookup-tool.md) | Search a vector-based query from an existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Embedding](./embedding-tool.md) | Use Azure OpenAI embedding models to create an embedding vector that represents the input text. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Azure AI Language tools*](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html) | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them by the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). Support contact: taincidents@microsoft.com | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
-
-_*The asterisk marks indicate custom tools, which are created by the community that extend prompt flow's capabilities for specific use cases. They aren't officially maintained or endorsed by prompt flow team. When you encounter questions or issues for these tools, please prioritize using the support contact if it is provided in the description._
-
-To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/https://docsupdatetracker.net/index.html).
-
-## Remarks
+| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Content Safety (Text)](./content-safety-tool.md) | Use Azure AI Content Safety to detect harmful content. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Embedding](./embedding-tool.md) | Use Azure OpenAI embedding models to create an embedding vector that represents the input text. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Index Lookup](./index-lookup-tool.md) | Search a vector-based query for relevant results using one or more text queries. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector Index Lookup](./vector-index-lookup-tool.md)<sup>1</sup> | Search text or a vector-based query from a vector index. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Faiss Index Lookup](./faiss-index-lookup-tool.md)<sup>1</sup> | Search a vector-based query from the Faiss index file. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup](./vector-db-lookup-tool.md)<sup>1</sup> | Search a vector-based query from an existing vector database. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+
+<sup>1</sup> The Index Lookup tool replaces the three deprecated legacy index tools: Vector Index Lookup, Vector DB Lookup, and Faiss Index Lookup. If you have a flow that contains one of those tools, follow the [migration steps](./index-lookup-tool.md#how-to-migrate-from-legacy-tools-to-the-index-lookup-tool) to upgrade your flow.
+
+## Custom tools
+
+To discover more custom tools developed by the open-source community such as [Azure AI Language tools](https://pypi.org/project/promptflow-azure-ai-language/), see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
+
- If existing tools don't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html).
-- To install the custom tools, if you're using the automatic runtime, you can readily install the publicly released package by adding the custom tool package name into the `requirements.txt` file in the flow folder. Then select the **Save and install** button to start installation. After completion, you can see the custom tools displayed in the tool list. In addition, if you want to use local or private feed package, please build an image first, then set up the runtime based on your image. To learn more, see [How to create and manage a runtime](../create-manage-runtime.md).
+- To install the custom tools, if you're using the automatic runtime, you can readily install the publicly released package by adding the custom tool package name in the `requirements.txt` file in the flow folder. Then select **Save and install** to start installation. After completion, the custom tools appear in the tool list. If you want to use a local or private feed package, build an image first, and then set up the runtime based on your image. To learn more, see [How to create and manage a runtime](../create-manage-runtime.md).
+
+ :::image type="content" source="../../media/prompt-flow/install-package-on-automatic-runtime.png" alt-text="Screenshot that shows how to install packages on automatic runtime."lightbox = "../../media/prompt-flow/install-package-on-automatic-runtime.png":::
## Next steps
ai-studio Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
Title: Prompt tool for flows in Azure AI Studio
-description: This article introduces the Prompt tool for flows in Azure AI Studio.
+description: This article introduces you to the Prompt tool for flows in Azure AI Studio.
[!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
-The prompt flow *Prompt* tool offers a collection of textual templates that serve as a starting point for creating prompts. These templates, based on the [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) template engine, facilitate the definition of prompts. The tool proves useful when prompt tuning is required prior to feeding the prompts into the large language model (LLM) in prompt flow.
+The prompt flow Prompt tool offers a collection of textual templates that serve as a starting point for creating prompts. These templates, based on the [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) template engine, facilitate the definition of prompts. The tool proves useful when prompt tuning is required before the prompts are fed into the large language model (LLM) in the prompt flow.
## Prerequisites
-Prepare a prompt. The [LLM tool](llm-tool.md) and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates.
+Prepare a prompt. The [LLM tool](llm-tool.md) and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates.
-In this example, the prompt incorporates Jinja templating syntax to dynamically generate the welcome message and personalize it based on the user's name. It also presents a menu of options for the user to choose from. Depending on whether the user_name variable is provided, it either addresses the user by name or uses a generic greeting.
+In this example, the prompt incorporates Jinja templating syntax to dynamically generate the welcome message and personalize it based on the user's name. It also presents a menu of options for the user to choose from. Depending on whether the `user_name` variable is provided, it either addresses the user by name or uses a generic greeting.
```jinja
Welcome to {{ website_name }}!
Please select an option from the menu below:
4. Contact customer support
```
-For more information and best practices, see [prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
## Build with the Prompt tool 1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md). 1. Select **+ Prompt** to add the Prompt tool to your flow.
- :::image type="content" source="../../media/prompt-flow/prompt-tool.png" alt-text="Screenshot of the Prompt tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/prompt-tool.png":::
-
-1. Enter values for the Prompt tool input parameters described [here](#inputs). For information about how to prepare the prompt input, see [prerequisites](#prerequisites).
-1. Add more tools (such as the [LLM tool](llm-tool.md)) to your flow as needed, or select **Run** to run the flow.
-1. The outputs are described [here](#outputs).
+ :::image type="content" source="../../media/prompt-flow/prompt-tool.png" alt-text="Screenshot that shows the Prompt tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/prompt-tool.png":::
+1. Enter values for the Prompt tool input parameters described in the [Inputs table](#inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
+1. Add more tools (such as the [LLM tool](llm-tool.md)) to your flow, as needed. Or select **Run** to run the flow.
+1. The outputs are described in the [Outputs table](#outputs).
## Inputs
-The following are available input parameters:
+The following input parameters are available.
| Name | Type | Description | Required |
|--|--|-|-|
-| prompt | string | The prompt template in Jinja | Yes |
-| Inputs | - | List of variables of prompt template and its assignments | - |
+| prompt | string | The prompt template in Jinja. | Yes |
+| Inputs | - | The list of variables of a prompt template and its assignments. | - |
## Outputs

### Example 1
-Inputs
+Inputs:
-| Variable | Type | Sample Value |
+| Variable | Type | Sample value |
||--|--|
| website_name | string | "Microsoft" |
| user_name | string | "Jane" |
-Outputs
+Outputs:
```
Welcome to Microsoft! Hello, Jane! Please select an option from the menu below:
1. View your account
2. Update personal information
3. Browse available products
4. Contact customer support
```
Welcome to Microsoft! Hello, Jane! Please select an option from the menu below:
### Example 2
-Inputs
+Inputs:
-| Variable | Type | Sample Value |
+| Variable | Type | Sample value |
|--|--|-|
| website_name | string | "Bing" |
| user_name | string | " |
-Outputs
+Outputs:
```
Welcome to Bing! Hello there! Please select an option from the menu below:
1. View your account
2. Update personal information
3. Browse available products
4. Contact customer support
```
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
Title: Python tool for flows in Azure AI Studio
-description: This article introduces the Python tool for flows in Azure AI Studio.
+description: This article introduces you to the Python tool for flows in Azure AI Studio.
[!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
-The prompt flow *Python* tool offers customized code snippets as self-contained executable nodes. You can quickly create Python tools, edit code, and verify results.
+The prompt flow Python tool offers customized code snippets as self-contained executable nodes. You can quickly create Python tools, edit code, and verify results.
## Build with the Python tool 1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md). 1. Select **+ Python** to add the Python tool to your flow.
- :::image type="content" source="../../media/prompt-flow/python-tool.png" alt-text="Screenshot of the Python tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/python-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/python-tool.png" alt-text="Screenshot that shows the Python tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/python-tool.png":::
-1. Enter values for the Python tool input parameters described [here](#inputs). For example, in the **Code** input text box you can enter the following Python code:
+1. Enter values for the Python tool input parameters that are described in the [Inputs table](#inputs). For example, in the **Code** input text box, you can enter the following Python code:
```python from promptflow import tool
The prompt flow *Python* tool offers customized code snippets as self-contained
For more information, see [Python code input requirements](#python-code-input-requirements).
-1. Add more tools to your flow as needed, or select **Run** to run the flow.
-1. The outputs are described [here](#outputs). Given the previous example Python code input, if the input message is "world", the output is `hello world`.
-
+1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
+1. The outputs are described in the [Outputs table](#outputs). Based on the previous example Python code input, if the input message is "world," the output is `hello world`.
## Inputs
-The list of inputs will change based on the arguments of the tool function, after you save the code. Adding type to arguments and return values help the tool show the types properly.
+The list of inputs changes based on the arguments of the tool function after you save the code. Adding types to arguments and `return` values helps the tool show the types properly.
| Name | Type | Description | Required |
|--|--|--|--|
-| Code | string | Python code snippet | Yes |
-| Inputs | - | List of tool function parameters and its assignments | - |
-
+| Code | string | The Python code snippet. | Yes |
+| Inputs | - | The list of the tool function parameters and their assignments. | - |
## Outputs
-The output is the `return` value of the python tool function. For example, consider the following python tool function:
+The output is the `return` value of the Python tool function. For example, consider the following Python tool function:
```python
from promptflow import tool

@tool
def my_python_tool(message: str) -> str:
    return 'hello ' + message
```
-If the input message is "world", the output is `hello world`.
+If the input message is "world," the output is `hello world`.
### Types
If the input message is "world", the output is `hello world`.
| double | param: float | Double type |
| list | param: list or param: List[T] | List type |
| object | param: dict or param: Dict[K, V] | Object type |
-| Connection | param: CustomConnection | Connection type will be handled specially |
+| Connection | param: CustomConnection | Connection type is handled specially. |
+
+Parameters with `Connection` type annotation are treated as connection inputs, which means:
-Parameters with `Connection` type annotation will be treated as connection inputs, which means:
-- Prompt flow extension will show a selector to select the connection.-- During execution time, prompt flow will try to find the connection with the name same from parameter value passed in.
+- The prompt flow extension shows a selector to select the connection.
+- During execution time, the prompt flow tries to find the connection with the same name from the parameter value that was passed in.
-> [!Note]
-> `Union[...]` type annotation is only supported for connection type, for example, `param: Union[CustomConnection, OpenAIConnection]`.
+> [!NOTE]
+> The `Union[...]` type annotation is only supported for connection type. An example is `param: Union[CustomConnection, OpenAIConnection]`.
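As a quick illustration (the function and parameter names here are hypothetical, not from the article), annotating the tool function's parameters with the types in the preceding table is what makes the generated inputs strongly typed; connection-typed parameters, covered later in this article, follow the same pattern:

```python
from typing import Dict, List

from promptflow import tool


@tool
def my_typed_tool(count: int, ratio: float, tags: List[str], options: Dict[str, str]) -> str:
    # After you save the code, each annotated parameter shows up as a typed input
    # in the Inputs section of the Python tool.
    return f"count={count}, ratio={ratio}, tags={tags}, options={options}"
```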
## Python code input requirements This section describes requirements of the Python code input for the Python tool. -- Python Tool Code should consist of a complete Python code, including any necessary module imports.-- Python Tool Code must contain a function decorated with `@tool` (tool function), serving as the entry point for execution. The `@tool` decorator should be applied only once within the snippet.-- Python tool function parameters must be assigned in 'Inputs' section
+- Python tool code should consist of complete Python code, including any necessary module imports.
+- Python tool code must contain a function decorated with `@tool` (tool function), serving as the entry point for execution. The `@tool` decorator should be applied only once within the snippet.
+- Python tool function parameters must be assigned in the `Inputs` section.
- Python tool function must have a return statement and value, which is the output of the tool. The following Python code is an example of best practices:
```python
from promptflow import tool

@tool
def my_python_tool(message: str) -> str:
    return 'hello ' + message
```
-## Consume custom connection in the Python tool
+## Consume a custom connection in the Python tool
-If you're developing a python tool that requires calling external services with authentication, you can use the custom connection in prompt flow. It allows you to securely store the access key and then retrieve it in your python code.
+If you're developing a Python tool that requires calling external services with authentication, you can use the custom connection in a prompt flow. It allows you to securely store the access key and then retrieve it in your Python code.
### Create a custom connection
-Create a custom connection that stores all your LLM API KEY or other required credentials.
+Create a custom connection that stores all your large language model API keys or other required credentials.
-1. Go to **AI project settings**, then select **New Connection**.
-1. Select **Custom** service. You can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+1. Go to **AI project settings**. Then select **New Connection**.
+1. Select **Custom** service. You can define your connection name. You can add multiple key-value pairs to store your credentials and keys by selecting **Add key-value pairs**.
> [!NOTE]
- > Make sure at least one key-value pair is set as secret, otherwise the connection will not be created successfully. You can set one Key-Value pair as secret by **is secret** checked, which will be encrypted and stored in your key value.
-
- :::image type="content" source="../../media/prompt-flow/create-connection.png" alt-text="Screenshot that shows create connection in AI Studio." lightbox = "../../media/prompt-flow/create-connection.png":::
+ > Make sure at least one key-value pair is set as secret. Otherwise, the connection won't be created successfully. To set one key-value pair as secret, select **is secret** to encrypt and store your key value.
+ :::image type="content" source="../../media/prompt-flow/create-connection.png" alt-text="Screenshot that shows creating a connection in AI Studio." lightbox = "../../media/prompt-flow/create-connection.png":::
1. Add the following custom keys to the connection: - `azureml.flow.connection_type`: `Custom` - `azureml.flow.module`: `promptflow.connections`
- :::image type="content" source="../../media/prompt-flow/custom-connection-keys.png" alt-text="Screenshot that shows add extra meta to custom connection in AI Studio." lightbox = "../../media/prompt-flow/custom-connection-keys.png":::
-
-
+ :::image type="content" source="../../media/prompt-flow/custom-connection-keys.png" alt-text="Screenshot that shows adding extra information to a custom connection in AI Studio." lightbox = "../../media/prompt-flow/custom-connection-keys.png":::
-### Consume custom connection in Python
+### Consume a custom connection in Python
-To consume a custom connection in your python code, follow these steps:
+To consume a custom connection in your Python code:
-1. In the code section in your python node, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function.
-1. Parse the input to the input section, then select your target custom connection in the value dropdown.
+1. In the code section in your Python node, import the custom connection library `from promptflow.connections import CustomConnection`. Define an input parameter of the type `CustomConnection` in the tool function.
+1. Parse the input to the input section. Then select your target custom connection in the value dropdown list.
For example:
```python
from promptflow import tool
from promptflow.connections import CustomConnection

@tool
def my_python_tool(message: str, myconn: CustomConnection) -> str:
    # Keys stored in the custom connection are available as attributes.
    connection_key2_value = myconn.key2
    return connection_key2_value
```

## Next steps

- [Learn more about how to create a flow](../flow-develop.md)
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Extra usage fees might apply for using GPT-4 Turbo with Vision and Azure AI Visi
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the [regions that support GPT-4 Turbo with Vision](../../ai-services/openai/concepts/models.md#gpt-4-and-gpt-4-turbo-preview-model-availability): Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your Azure AI project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the [regions that support GPT-4 Turbo with Vision](../../ai-services/openai/concepts/models.md#gpt-4-and-gpt-4-turbo-preview-model-availability). When you deploy from your Azure AI project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version.
- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. ## Start a chat session to analyze images or video
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
Last updated 2/8/2024 --++ # Tutorial: Deploy a web app for chat on your data
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
Now that you have your evaluation dataset, you can evaluate your flow by followi
1. Select a model to use for evaluation. In this example, select **gpt-35-turbo-16k**. Then select **Next**. > [!NOTE]
- > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a GPT-4 or gpt-35-turbo-16k model. If you didn't previously deploy a GPT-4 or gpt-35-turbo-16k model, you can deploy another model by following the steps in [Deploy a chat model](#deploy-a-chat-model). Then return to this step and select the model you deployed.
- > The evaluation process may take up lots of tokens, so it's recommended to use a model which can support >=16k tokens.
+ > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a model that supports at least 16k tokens, such as gpt-4-32k or gpt-35-turbo-16k. If you didn't previously deploy such a model, you can deploy another model by following the steps in [Deploy a chat model](#deploy-a-chat-model). Then return to this step and select the model you deployed.
1. Select **Add new dataset**. Then select **Next**.
aks Aks Extension Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-vs-code.md
+
+ Title: Use the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+description: Learn how to use the Azure Kubernetes Service (AKS) extension for Visual Studio Code to manage your Kubernetes clusters.
++ Last updated : 04/08/2024++++
+# Use the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+
+The Azure Kubernetes Service (AKS) extension for Visual Studio Code allows you to easily view and manage your AKS clusters from your development environment.
+
+## Features
+
+The Azure Kubernetes Service (AKS) extension for Visual Studio Code provides a rich set of features to help you manage your AKS clusters, including:
+
+* **Merge into Kubeconfig**: Merge your AKS cluster into your `kubeconfig` file to manage your cluster from the command line.
+* **Save Kubeconfig**: Save your AKS cluster configuration to a file.
+* **AKS Diagnostics**: View diagnostics information based on your cluster's backend telemetry for identity, security, networking, node health, and create, upgrade, delete, and scale issues.
+* **AKS Periscope**: Extract detailed diagnostic information and export it to an Azure storage account for further analysis.
+* **Install Azure Service Operator (ASO)**: Deploy the latest version of ASO and provision Azure resources within Kubernetes.
+* **Start or stop a cluster**: Start or stop your AKS cluster to save costs when you're not using it.
+
+For more information, see [AKS extension for Visual Studio Code features](https://code.visualstudio.com/docs/azure/aksextensions#_features).
+
+## Installation
+
+1. Open Visual Studio Code.
+2. In the **Extensions** view, search for **Azure Kubernetes Service**.
+3. Select the **Azure Kubernetes Service** extension and then select **Install**.
+
+For more information, see [Install the AKS extension for Visual Studio Code](https://code.visualstudio.com/docs/azure/aksextensions#_install-the-azure-kubernetes-services-extension).
+
+## Next steps
+
+To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations with AKS](./integrations.md).
+
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the
- All global Azure DNS zones integrated with the add-on have to be in the same resource group. - All private Azure DNS zones integrated with the add-on have to be in the same resource group. - Editing the ingress-nginx `ConfigMap` in the `app-routing-system` namespace isn't supported.
+- The following snippet annotations are blocked and will prevent an Ingress from being configured: `load_module`, `lua_package`, `_by_lua`, `location`, `root`, `proxy_pass`, `serviceaccount`, `{`, `}`, `'`.
## Enable application routing using Azure CLI
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
The following table includes parameters you can use to define a custom storage c
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| |resourceGroup | Specify the resource group for the Azure Disks | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster|
-|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`|
-|DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`|
+|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`|
+|DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] Throughput Capability (minimum: 0.032/GiB) | 1~2000 | No | `100`|
|LogicalSectorSize | Logical sector size in bytes for ultra disk. Supported values are 512 and 4096. 4096 is the default. | `512`, `4096` | No | `4096`|
|tags | Azure Disk [tags][azure-tags] | Tag format: `key1=val1,key2=val2` | No | ""|
|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest][disk-encryption] | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
kubectl delete -f azure-pvc.yaml
[disk-host-cache-setting]: ../virtual-machines/windows/premium-storage-performance.md#disk-caching [use-ultra-disks]: use-ultra-disks.md [ultra-ssd-disks]: ../virtual-machines/linux/disks-ultra-ssd.md
+[premiumv2_lrs_disks]: ../virtual-machines/disks-types.md#premium-ssd-v2
[azure-tags]: ../azure-resource-manager/management/tag-resources.md [disk-encryption]: ../virtual-machines/windows/disk-encryption.md [azure-disk-write-accelerator]: ../virtual-machines/windows/how-to-enable-write-accelerator.md
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Kubernetes application-based container offers can't be deployed on AKS for Azure
1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, on the left side under **Categories** select **Containers**. :::image type="content" source="./media/deploy-marketplace/containers-inline.png" alt-text="Screenshot of Azure Marketplace offers in the Azure portal, with the container category on the left side highlighted." lightbox="./media/deploy-marketplace/containers.png":::-
+
> [!IMPORTANT]
- > The **Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer.
-
+ > The **Containers** category includes Kubernetes applications. This walkthrough is specific to Kubernetes applications.
1. You'll see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**. :::image type="content" source="./media/deploy-marketplace/see-more-inline.png" alt-text="Screenshot of Azure Marketplace K8s offers in the Azure portal. 'See More' is highlighted." lightbox="./media/deploy-marketplace/see-more.png":::
If you experience issues, see the [troubleshooting checklist for failed deployme
- Learn more about [exploring and analyzing costs][billing]. - Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)+ - Learn more about [deploying a Kubernetes application through an ARM template](/azure/aks/deploy-application-template) <!-- LINKS -->
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Title: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster recommendations: false description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster-+ Previously updated : 01/16/2024 Last updated : 04/02/2024 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
The Open Liberty Operator simplifies the deployment and management of applicatio
For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
-This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-* You can use Azure Cloud Shell or a local terminal.
+## Prerequisites
-
-* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* Install the [Azure CLI](/cli/azure/install-azure-cli). If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+* Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+* When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+* Run [az version](/cli/azure/reference-index?#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az-upgrade). This article requires at least version 2.31.0 of Azure CLI.
+* Install a Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
+* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+* Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+* Ensure [Git](https://git-scm.com) is installed.
+* Make sure you're assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
> [!NOTE] > You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker. > > :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
-* If running the commands in this guide locally (instead of Azure Cloud Shell):
- * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
- * Install a Java SE implementation, version 17 or later. (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
- * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
- * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
-* Make sure you're assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
- ## Create a Liberty on AKS deployment using the portal The following steps guide you to create a Liberty runtime on AKS. After completing these steps, you have an Azure Container Registry and an Azure Kubernetes Service cluster for deploying your containerized application.
-1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks).
+1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM Liberty on AKS*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks).
1. Select **Create**.
The following steps guide you to create a Liberty runtime on AKS. After completi
1. Create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. 1. Select *East US* as **Region**.
- Create environment variables in your shell for the resource group names for the cluster and the database.
-
- ### [Bash](#tab/in-bash)
-
- ```bash
- export RESOURCE_GROUP_NAME=<your-resource-group-name>
- export DB_RESOURCE_GROUP_NAME=<your-resource-group-name>
- ```
-
- ### [PowerShell](#tab/in-powershell)
-
- ```powershell
- $Env:RESOURCE_GROUP_NAME="<your-resource-group-name>"
- $Env:DB_RESOURCE_GROUP_NAME="<your-resource-group-name>"
- ```
-
-
+ 1. Create an environment variable in your shell for the resource group name for the cluster.
+
+ ### [Bash](#tab/in-bash)
+
+ ```bash
+ export RESOURCE_GROUP_NAME=<your-resource-group-name>
+ ```
+
+ ### [PowerShell](#tab/in-powershell)
+
+ ```powershell
+ $Env:RESOURCE_GROUP_NAME="<your-resource-group-name>"
+ ```
-1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. The remaining values do not need to be changed from their default values.
+1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. For our purposes, just keep all the defaults on this pane.
1. Select **Next**, enter the **Load Balancing** pane. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options.
- 1. You can customize the **virtual network** and **subnet** into which the deployment will place the resources. The remaining values do not need to be changed from their default values.
+ 1. You can optionally customize the **virtual network** and **subnet** into which the deployment places the resources. The remaining values don't need to be changed from their default values.
1. You can provide the **TLS/SSL certificate** presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Don't go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md). 1. You can select **Enable cookie based affinity**, also known as sticky sessions. We want sticky sessions enabled for this article, so ensure this option is selected.
The following steps guide you to create a Liberty runtime on AKS. After completi
## Capture selected information from the deployment
-If you navigated away from the **Deployment is in progress** page, the following steps will show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to the third step.
+If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, go to the newly created resource group and skip to the third step.
1. In the upper left of any portal page, select the hamburger menu and select **Resource groups**. 1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
If you navigated away from the **Deployment is in progress** page, the following
1. Save aside the values for **Login server**, **Registry name**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard. 1. Navigate again to the resource group into which you deployed the resources. 1. In the **Settings** section, select **Deployments**.
-1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string `ibm`.
+1. Select the bottom-most deployment in the list. The **Deployment name** matches the publisher ID of the offer. It contains the string `ibm`.
1. In the left pane, select **Outputs**. 1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
If you navigated away from the **Deployment is in progress** page, the following
### [Bash](#tab/in-bash)
- Paste the value of `appDeploymentTemplateYaml` or `appDeploymentYaml` into a Bash shell, append `| grep secretName`, and execute. This command will output the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
+ Paste the value of `appDeploymentTemplateYaml` or `appDeploymentYaml` into a Bash shell, append `| grep secretName`, and execute. This command outputs the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
### [PowerShell](#tab/in-powershell)
- Paste the quoted string in `appDeploymentTemplateYaml` or `appDeploymentYaml` into a PowerShell, append `| ForEach-Object { [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($_)) } | Select-String "secretName"`, and execute. This command will output the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
+ Paste the quoted string in `appDeploymentTemplateYaml` or `appDeploymentYaml` into a PowerShell (excluding the `| base64` portion), append `| ForEach-Object { [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($_)) } | Select-String "secretName"`, and execute. This command outputs the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
-These values will be used later in this article. Note that several other useful commands are listed in the outputs.
+ These values are used later in this article. Several other useful commands are listed in the outputs.
-> [!NOTE]
-> You may notice a similar output named **appDeploymentYaml**. The difference between output *appDeploymentTemplateYaml* and *appDeploymentYaml* is:
-> * *appDeploymentTemplateYaml* is populated if and only if the deployment **does not include** an application.
-> * *appDeploymentYaml* is populated if and only if the deployment **does include** an application.
+ > [!NOTE]
+ > You may notice a similar output named **appDeploymentYaml**. The difference between output *appDeploymentTemplateYaml* and *appDeploymentYaml* is:
+ > * *appDeploymentTemplateYaml* is populated if and only if the deployment **does not include** an application.
+ > * *appDeploymentYaml* is populated if and only if the deployment **does include** an application.
## Create an Azure SQL Database [!INCLUDE [create-azure-sql-database](includes/jakartaee/create-azure-sql-database.md)]
-Now that the database and AKS cluster have been created, we can proceed to preparing AKS to host your Open Liberty application.
+1. Create an environment variable in your shell for the resource group name for the database.
+
+### [Bash](#tab/in-bash)
+
+```bash
+export DB_RESOURCE_GROUP_NAME=<db-resource-group>
+```
+
+### [PowerShell](#tab/in-powershell)
+
+```powershell
+$Env:DB_RESOURCE_GROUP_NAME="<db-resource-group>"
+```
+++
+Now that the database and AKS cluster are created, we can proceed to preparing AKS to host your Open Liberty application.
## Configure and deploy the sample application
Follow the steps in this section to deploy the sample application on the Liberty
Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
-There are a few samples in the repository. We'll use *java-app/*. Here's the file structure of the application.
+There are a few samples in the repository. We use *java-app/*. Here's the file structure of the application.
#### [Bash](#tab/in-bash)
git checkout 20240109
-If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you have checked out a tag.
+If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you checked out a tag.
``` java-app
In directory *liberty/config*, the *server.xml* file is used to configure the DB
### Build the project
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many variables from the environment. As part of the Maven build, these variables are used to populate values in the YAML files located in *src/main/aks*. You can do something similar for your application outside Maven if you prefer.
+Now that you gathered the necessary properties, you can build the application. The POM file for the project reads many variables from the environment. As part of the Maven build, these variables are used to populate values in the YAML files located in *src/main/aks*. You can do something similar for your application outside Maven if you prefer.
#### [Bash](#tab/in-bash) - ```bash cd $BASE_DIR/java-app
-# The following variables will be used for deployment file generation into target.
+# The following variables are used for deployment file generation into target.
export LOGIN_SERVER=<Azure-Container-Registry-Login-Server-URL> export REGISTRY_NAME=<Azure-Container-Registry-name> export USER_NAME=<Azure-Container-Registry-username>
mvn clean install
```powershell cd $env:BASE_DIR\java-app
-# The following variables will be used for deployment file generation into target.
-$Env:LOGIN_SERVER=<Azure-Container-Registry-Login-Server-URL>
-$Env:REGISTRY_NAME=<Azure-Container-Registry-name>
-$Env:USER_NAME=<Azure-Container-Registry-username>
-$Env:PASSWORD=<Azure-Container-Registry-password>
-$Env:DB_SERVER_NAME=<server-name>.database.windows.net
-$Env:DB_NAME=<database-name>
-$Env:DB_USER=<server-admin-login>@<server-name>
-$Env:DB_PASSWORD=<server-admin-password>
-$Env:INGRESS_TLS_SECRET=<ingress-TLS-secret-name>
+# The following variables are used for deployment file generation into target.
+$Env:LOGIN_SERVER="<Azure-Container-Registry-Login-Server-URL>"
+$Env:REGISTRY_NAME="<Azure-Container-Registry-name>"
+$Env:USER_NAME="<Azure-Container-Registry-username>"
+$Env:PASSWORD="<Azure-Container-Registry-password>"
+$Env:DB_SERVER_NAME="<server-name>.database.windows.net"
+$Env:DB_NAME="<database-name>"
+$Env:DB_USER="<server-admin-login>@<server-name>"
+$Env:DB_PASSWORD="<server-admin-password>"
+$Env:INGRESS_TLS_SECRET="<ingress-TLS-secret-name>"
mvn clean install ```
mvn clean install
You can now run and test the project locally before deploying to Azure. For convenience, we use the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html). For your application, you can do something similar using any other mechanism, such as your local IDE. You can also consider using the `liberty:devc` option intended for development with containers. You can read more about `liberty:devc` in the [Liberty docs](https://openliberty.io/docs/latest/development-mode.html#_container_support_for_dev_mode).
-1. Start the application using `liberty:run`. `liberty:run` will also use the environment variables defined in the previous step.
+1. Start the application using `liberty:run`. `liberty:run` also uses the environment variables defined in the previous step.
#### [Bash](#tab/in-bash)
Use the following steps to deploy and test the application:
1. Connect to the AKS cluster.
- Paste the value of **cmdToConnectToCluster** into a Bash shell and execute.
+ Paste the value of **cmdToConnectToCluster** into a shell and execute.
1. Apply the DB secret.
Use the following steps to deploy and test the application:
- You'll see the output `secret/db-secret-sql created`.
+ You see the output `secret/db-secret-sql created`.
1. Apply the deployment file.
Use the following steps to deploy and test the application:
   Copy the value of **ADDRESS** from the output. This value is the frontend public IP address of the deployed Azure Application Gateway.
- 1. Go to `https://<ADDRESS>` to test the application. For your convenience, this shell command will create an environment variable whose value you can paste straight into the browser.
+ 1. Go to `https://<ADDRESS>` to test the application. For your convenience, this shell command creates an environment variable whose value you can paste straight into the browser.
#### [Bash](#tab/in-bash)
Use the following steps to deploy and test the application:
## Clean up resources
-To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources.
+To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, database, and all related resources.
### [Bash](#tab/in-bash)
You can learn more from the following references:
* [Open Liberty](https://openliberty.io/) * [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator) * [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)-
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
Title: "Deploy Quarkus on Azure Kubernetes Service" description: Shows how to quickly stand up Quarkus on Azure Kubernetes Service.-+
Instead of `quarkus dev`, you can accomplish the same thing with Maven by using
You may be asked if you want to send telemetry of your usage of Quarkus dev mode. If so, answer as you like.
-Quarkus dev mode enables live reload with background compilation. If you modify any aspect of your app source code and refresh your browser, you can see the changes. If there are any issues with compilation or deployment, an error page lets you know. Quarkus dev mode listens for a debugger on port 5005. If you want to wait for the debugger to attach before running, pass `-Dsuspend` on the command line. If you donΓÇÖt want the debugger at all, you can use `-Ddebug=false`.
+Quarkus dev mode enables live reload with background compilation. If you modify any aspect of your app source code and refresh your browser, you can see the changes. If there are any issues with compilation or deployment, an error page lets you know. Quarkus dev mode listens for a debugger on port 5005. If you want to wait for the debugger to attach before running, pass `-Dsuspend` on the command line. If you don't want the debugger at all, you can use `-Ddebug=false`.
The output should look like the following example:
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
Title: "Deploy WebLogic Server on Azure Kubernetes Service using the Azure portal" description: Shows how to quickly stand up WebLogic Server on Azure Kubernetes Service.-+ Last updated 02/09/2024
Use the following steps to build the image:
=> => naming to docker.io/library/model-in-image:WLS-v1 0.2s ```
-1. If you have successfully created the image, then it should now be in your local machineΓÇÖs Docker repository. You can verify the image creation by using the following command:
+1. If you have successfully created the image, then it should now be in your local machine's Docker repository. You can verify the image creation by using the following command:
```text docker images model-in-image:WLS-v1
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
# Terminate a long running operation on an Azure Kubernetes Service (AKS) cluster
-Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
+Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. You can get insight into the progress of any ongoing operation, such as create, upgrade, and scale, by using any preview API version after `2024-01-02-preview` with the following `az rest` command:
+
+```azurecli-interactive
+export ResourceID="Your cluster ResourceID"
+az rest --method get --url "https://management.azure.com$ResourceID/operations/latest?api-version=2024-01-02-preview"
+```
+
+This command provides you with a percentage that indicates how close the operation is to completion. You can use this method to get these insights for up to 50 of the latest operations on your cluster. The "percentComplete" attribute denotes the extent of completion for the ongoing operation, as shown in the following example:
+
+```output
+"id": "/subscriptions/26fe00f8-9173-4872-9134-bb1d2e00343a/resourcegroups/testStatus/providers/Microsoft.ContainerService/managedClusters/contoso/operations/fc10e97d-b7a8-4a54-84de-397c45f322e1",
+ "name": "fc10e97d-b7a8-4a54-84de-397c45f322e1",
+ "percentComplete": 10,
+ "startTime": "2024-04-08T18:21:31Z",
+ "status": "InProgress"
+```
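If you prefer to poll this endpoint from code instead of `az rest`, the following is a hedged sketch that assumes the `azure-identity` and `requests` packages and a placeholder resource ID; it calls the same ARM URL shown above and prints the status and completion percentage:

```python
# Sketch only: queries the latest AKS operation through the same ARM endpoint as `az rest`.
import requests
from azure.identity import DefaultAzureCredential

# Placeholder: replace with your cluster's full resource ID.
resource_id = "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.get(
    f"https://management.azure.com{resource_id}/operations/latest",
    params={"api-version": "2024-01-02-preview"},
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

operation = response.json()
print(operation.get("status"), operation.get("percentComplete"))
```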
+
+While it's important to allow operations to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
AKS support for aborting long running operations is now generally available. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support |
|--|-|--|--|-|--|
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA|
| 1.29 | Dec 2023 | Feb 2024 | Mar 2024 | | Until 1.33 GA |
+| 1.30 | Apr 2024 | May 2024 | Jun 2024 | | Until 1.34 GA |
*\* Indicates the version is designated for Long Term Support*
Note the following important changes before you upgrade to any of the available
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-||
-| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
| 1.26 | Azure policy 1.3.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.1<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0<br>azurefile-csi-driver 1.26.10<br>| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.26.10 |None | 1.27 | Azure policy 1.3.0<br>azuredisk-csi driver v1.28.5<br>azurefile-csi driver v1.28.7<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>Keda 2.11.2<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>azurefile-csi-driver 1.28.7<br>KMS 0.5.0<br>CSI Secret store driver 1.3.4-1<br>|Cilium 1.13.10-1<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.11.2<br>Cilium 1.13.10-1<br>azurefile-csi-driver 1.28.7<br>azuredisk-csi driver v1.28.5<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2|Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards. | 1.28 | Azure policy 1.3.0<br>azurefile-csi-driver 1.29.2<br>csi-node-driver-registrar v2.9.0<br>csi-livenessprobe 2.11.0<br>azuredisk-csi-linux v1.29.2<br>azuredisk-csi-windows v1.29.2<br>csi-provisioner v3.6.2<br>csi-attacher v4.5.0<br>csi-resizer v1.9.3<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>CSI Secret store driver 1.3.4-1<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.10-1<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.28.13| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.29.2<br>csi-resizer v1.9.3<br>csi-attacher v4.4.2<br>csi-provisioner v4.4.2<br>blob-csi v1.23.2<br>azurefile-csi driver v1.29.2<br>azuredisk-csi driver v1.29.2<br>csi-livenessprobe v2.11.0<br>csi-node-driver-registrar v2.9.0|None
New Supported Version List
Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.26 is considered platform support when v1.29 is the latest GA version. However, during the v1.30 GA release, v1.26 will then auto-upgrade to v1.27. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on Kubernetes upstream.
For information on how to upgrade your cluster, see:
[get-azaksversion]: /powershell/module/az.aks/get-azaksversion [aks-tracker]: release-tracker.md [fleet-multi-cluster-upgrade]: /azure/kubernetes-fleet/update-orchestration-
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| - | -- | -- |
| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple `audience` values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. Policy expressions are allowed. | No |
| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. Policy expressions aren't allowed. | No |
-| client-application-ids | Contains a list of acceptable client application IDs. If multiple `application-id` elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. If a client application ID isn't provided, one or more `audience` claims should be specified. Policy expressions aren't allowed. | No |
+| client-application-ids | Contains a list of acceptable client application IDs. If multiple `application-id` elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. If a client application ID isn't provided, one or more `audience` claims should be specified. Policy expressions aren't allowed. | Yes |
| required-claims | Contains a list of `claim` elements for claim values expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. Policy expressions are allowed. | No | ### claim attributes
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
This article shows you how to configure a custom container to run on Azure App S
::: zone pivot="container-windows"
-This guide provides key concepts and instructions for containerization of Windows apps in App Service. If you've never used Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first.
+This guide provides key concepts and instructions for containerization of Windows apps in App Service. New Azure App Service users should follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first.
::: zone-end ::: zone pivot="container-linux"
-This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you've never used Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. There's also a [multi-container app quickstart](quickstart-multi-container.md) and [tutorial](tutorial-multi-container-app.md). For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
+This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you're new to Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. There's also a [multi-container app quickstart](quickstart-multi-container.md) and [tutorial](tutorial-multi-container-app.md). For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
::: zone-end
For *\<username>* and *\<password>*, supply the sign-in credentials for your pri
## Use managed identity to pull image from Azure Container Registry
-Use the following steps to configure your web app to pull from ACR using managed identity. The steps use system-assigned managed identity, but you can use user-assigned managed identity as well.
+Use the following steps to configure your web app to pull from Azure Container Registry (ACR) using managed identity. The steps use system-assigned managed identity, but you can use user-assigned managed identity as well.
1. Enable [the system-assigned managed identity](./overview-managed-identity.md) for the web app by using the [`az webapp identity assign`](/cli/azure/webapp/identity#az-webapp-identity-assign) command: ```azurecli-interactive az webapp identity assign --resource-group <group-name> --name <app-name> --query principalId --output tsv ```
- Replace `<app-name>` with the name you used in the previous step. The output of the command (filtered by the `--query` and `--output` arguments) is the service principal ID of the assigned identity, which you use shortly.
+ Replace `<app-name>` with the name you used in the previous step. The output of the command (filtered by the `--query` and `--output` arguments) is the service principal ID of the assigned identity.
1. Get the resource ID of your Azure Container Registry: ```azurecli-interactive az acr show --resource-group <group-name> --name <registry-name> --query id --output tsv
Use the following steps to configure your web app to pull from ACR using managed
- `<app-name>` with the name of your web app. >[!Tip] > If you are using PowerShell console to run the commands, you need to escape the strings in the `--generic-configurations` argument in this and the next step. For example: `--generic-configurations '{\"acrUseManagedIdentityCreds\": true'`
-1. (Optional) If your app uses a [user-assigned managed identity](overview-managed-identity.md#add-a-user-assigned-identity), make sure this is configured on the web app and then set the `acrUserManagedIdentityID` property to specify its client ID:
+1. (Optional) If your app uses a [user-assigned managed identity](overview-managed-identity.md#add-a-user-assigned-identity), make sure the identity is configured on the web app and then set the `acrUserManagedIdentityID` property to specify its client ID:
```azurecli-interactive az identity show --resource-group <group-name> --name <identity-name> --query clientId --output tsv
You're all set, and the web app now uses managed identity to pull from Azure Con
## Use an image from a network protected registry
-To connect and pull from a registry inside a virtual network or on-premises, your app must integrate with a virtual network. This is also needed for Azure Container Registry with private endpoint. When your network and DNS resolution is configured, you enable the routing of the image pull through the virtual network by configuring the `vnetImagePullEnabled` site setting:
+To connect and pull from a registry inside a virtual network or on-premises, your app must integrate with a virtual network (VNET). VNET integration is also needed for Azure Container Registry with private endpoint. When your network and DNS resolution is configured, you enable the routing of the image pull through the virtual network by configuring the `vnetImagePullEnabled` site setting:
```azurecli-interactive az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.vnetImagePullEnabled [true|false]
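# Optional follow-up (a sketch; assumes the same resource group and app name as above): confirm the current value of the setting
az resource show --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --query properties.vnetImagePullEnabled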
You can connect to your Windows container directly for diagnostic tasks by navig
- It functions separately from the graphical browser above it, which only shows the files in your [shared storage](#use-persistent-shared-storage). - In a scaled-out app, the SSH session is connected to one of the container instances. You can select a different instance from the **Instance** dropdown in the top Kudu menu.-- Any change you make to the container from within the SSH session does *not* persist when your app is restarted (except for changes in the shared storage), because it's not part of the Docker image. To persist your changes, such as registry settings and software installation, make them part of the Dockerfile.
+- Any change you make to the container from within the SSH session **doesn't** persist when your app is restarted (except for changes in the shared storage), because it's not part of the Docker image. To persist your changes, such as registry settings and software installation, make them part of the Dockerfile.
## Access diagnostic logs
App Service logs actions by the Docker host and activities from within the cont
There are several ways to access Docker logs: -- [In the Azure portal](#in-azure-portal)-- [From Kudu](#from-kudu)-- [With the Kudu API](#with-the-kudu-api)-- [Send logs to Azure monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
+- [Azure portal](#in-azure-portal)
+- [Kudu](#from-kudu)
+- [Kudu API](#with-the-kudu-api)
+- [Azure monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
### In Azure portal
Docker logs are displayed in the portal, in the **Container Settings** page of y
### From Kudu
-Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and select the **LogFiles** folder to see the individual log files. To download the entire **LogFiles** directory, select the **Download** icon to the left of the directory name. You can also access this folder using an FTP client.
+Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and select the **LogFiles** folder to see the individual log files. To download the entire **LogFiles** directory, select the **"Download"** icon to the left of the directory name. You can also access this folder using an FTP client.
In the SSH terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage isn't enabled. To enable this behavior in the console terminal, [enable persistent shared storage](#use-persistent-shared-storage).
To download all the logs together in one ZIP file, access `https://<app-name>.sc
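If you prefer the command line, a rough equivalent is to call the Kudu zip API with your deployment credentials. This is only a sketch; the folder path and credential handling shown here are assumptions:

```bash
# Download the LogFiles folder as a ZIP by using the Kudu zip API (curl prompts for the deployment password)
curl -u '<deployment-username>' "https://<app-name>.scm.azurewebsites.net/api/zip/LogFiles/" -o logfiles.zip
```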
## Customize container memory
-By default all Windows Containers deployed in Azure App Service have a memory limit configured. The following table lists the default settings per App Service Plan SKU.
+By default, all Windows containers deployed in Azure App Service have a memory limit configured. The following table lists the default settings per App Service Plan SKU.
| App Service Plan SKU | Default memory limit per app in MB | |-|-|
In PowerShell:
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITE_MEMORY_LIMIT_MB"=2000} ```
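If you prefer the Azure CLI, a rough equivalent of the PowerShell command above might look like the following sketch, which assumes the same app setting name and value:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITE_MEMORY_LIMIT_MB=2000
```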
-The value is defined in MB and must be less and equal to the total physical memory of the host. For example, in an App Service plan with 8GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
+The value is defined in MB and must be less than or equal to the total physical memory of the host. For example, in an App Service plan with 8 GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
## Customize the number of compute cores
The processors might be multicore or hyperthreading processors. Information on h
## Customize health ping behavior
-App Service considers a container to be successfully started when the container starts and responds to an HTTP ping. The health ping request contains the header `User-Agent= "App Service Hyper-V Container Availability Check"`. If the container starts but doesn't respond to a ping after a certain amount of time, App Service logs an event in the Docker log, saying that the container didn't start.
+App Service considers a container to be successfully started when the container starts and responds to an HTTP ping. The health ping request contains the header `User-Agent= "App Service Hyper-V Container Availability Check"`. If the container starts but doesn't respond to pings after a certain amount of time, App Service logs an event in the Docker log, saying that the container didn't start.
If your application is resource-intensive, the container might not respond to the HTTP ping in time. To control the actions when HTTP pings fail, set the `CONTAINER_AVAILABILITY_CHECK_MODE` app setting. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash:
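For example, a sketch of setting the app setting from the Azure CLI (the `ReportOnly` value shown here is an assumption about the supported options):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings CONTAINER_AVAILABILITY_CHECK_MODE="ReportOnly"
```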
Secure Shell (SSH) is commonly used to execute administrative commands remotely
4. Rebuild and push the Docker image to the registry, and then test the Web App SSH feature on Azure portal.
-Further troubleshooting information is available at the Azure App Service OSS blog: [Enabling SSH on Linux Web App for Containers](https://azureossd.github.io/2022/04/27/2022-Enabling-SSH-on-Linux-Web-App-for-Containers/index.html#troubleshooting)
+Further troubleshooting information is available at the Azure App Service blog: [Enabling SSH on Linux Web App for Containers](https://azureossd.github.io/2022/04/27/2022-Enabling-SSH-on-Linux-Web-App-for-Containers/index.html#troubleshooting)
## Access diagnostic logs
In your *docker-compose.yml* file, map the `volumes` option to `${WEBAPP_STORAGE
wordpress: image: <image name:tag> volumes:
- - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
- - ${WEBAPP_STORAGE_HOME}/phpmyadmin:/var/www/phpmyadmin
- - ${WEBAPP_STORAGE_HOME}/LogFiles:/var/log
+ - "${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html"
+ - "${WEBAPP_STORAGE_HOME}/phpmyadmin:/var/www/phpmyadmin"
+ - "${WEBAPP_STORAGE_HOME}/LogFiles:/var/log"
``` ### Preview limitations
The following lists show supported and unsupported Docker Compose configuration
- "version x.x" always needs to be the first YAML statement in the file - ports section must use quoted numbers-- image > volume section must be quoted and cannot have permissions definitions
+- image > volume section must be quoted and can't have permissions definitions
- volumes section must not have an empty curly brace after the volume name > [!NOTE]
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 4/1/2024 Last updated : 4/4/2024 # Use the side-by-side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview)
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
## 10. Get the inbound IP addresses for your new App Service Environment v3 and update dependent resources
-You have two App Service Environments at this stage in the migration process. Your apps are running in both environments. You need to update any dependent resources to use the new IP inbound address for your new App Service Environment v3. For internal facing (ILB) App Service Environments, you need to update your private DNS zones to point to the new inbound IP address. You should account for both the old and new inbound IP at this point. You can remove the dependencies on the previous IP address after you complete the next step.
+You have two App Service Environments at this stage in the migration process. Your apps are running in both environments. You need to update any dependent resources to use the new inbound IP address for your new App Service Environment v3. For internal facing (ILB) App Service Environments, you need to update your private DNS zones to point to the new inbound IP address. This step is where you can validate your new environment and make any remaining necessary updates to your dependent resources.
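For an ILB environment, adding the new inbound IP to an existing private DNS zone from the Azure CLI might look like the following sketch. The zone name and wildcard record set are assumptions about a typical configuration, and you can remove the record for the old IP after traffic is redirected:

```azurecli
az network private-dns record-set a add-record \
  --resource-group <group-name> \
  --zone-name <ase-name>.appserviceenvironment.net \
  --record-set-name "*" \
  --ipv4-address <new-inbound-ip-address>
```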
> [!IMPORTANT] > During the preview, the new inbound IP might be returned incorrectly due to a known bug. Open a support ticket to receive the correct IP addresses for your App Service Environment v3.
For ELB App Service Environments, get the public inbound IP address by running t
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.networkingConfiguration.externalInboundIpAddresses ```
-## 11. Redirect customer traffic and complete migration
+## 11. Redirect customer traffic, validate your App Service Environment v3, and complete migration
-This step is your opportunity to test and validate your new App Service Environment v3. Your App Service Environment v2 front ends are still running, but the backing compute is an App Service Environment v3. If you're able to access your apps without issues, that means you're ready to complete the migration.
+This step is your opportunity to test and validate your new App Service Environment v3. Your App Service Environment v2 front ends are still running, but the backing compute is an App Service Environment v3. If you're able to access your apps without issues, that means you're ready to complete the migration. If you want to test your App Service Environment v3 front ends, you can do so by using the inbound IP address you got in the previous step.
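As a quick spot check, you can send a request to the new inbound IP with the app's default host name in the Host header. This is only a sketch; the host name format assumes an ILB environment that uses the default domain suffix:

```bash
curl -k -I https://<new-inbound-ip-address> -H "Host: <app-name>.<ase-name>.appserviceenvironment.net"
```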
-Once you confirm your apps are working as expected, you can redirect customer traffic to your new App Service Environment v3 front ends by running the following command. This command also deletes your old environment.
+Once you confirm your apps are working as expected, you can redirect customer traffic to your new App Service Environment v3 by running the following command. This command also deletes your old environment.
+
+If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to revert the migration. Don't run the DNS change command if you need to revert the migration. For more information, see [Revert migration](./side-by-side-migrate.md#redirect-customer-traffic-validate-your-app-service-environment-v3-and-complete-migration).
```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=DnsChange&api-version=2022-03-01"
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties
During this step, you get a status of `CompletingMigration`. When you get a status of `MigrationCompleted`, the traffic redirection step is done and your migration is complete.
-If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to revert the migration. Don't run the above command if you need to revert the migration. For more information, see [Revert migration](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration).
- ## Next steps > [!div class="nextstepaction"]
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the in-place migration fea
description: Overview of the in-place migration feature for migration to App Service Environment v3. Previously updated : 03/27/2024 Last updated : 04/08/2024
Your App Service Environment v3 can be deployed across availability zones in the
If your existing App Service Environment uses a custom domain suffix, you're prompted to configure a custom domain suffix for your new App Service Environment v3. You need to provide the custom domain name, managed identity, and certificate. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). You must configure a custom domain suffix for your new environment even if you no longer want to use it. Once migration is complete, you can remove the custom domain suffix configuration if needed.
-If your migration includes a custom domain suffix, for App Service Environment v3, the custom domain isn't displayed in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly.
+If your migration includes a custom domain suffix, for App Service Environment v3, the custom domain isn't displayed in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. Also, on App Service Environment v2, if you have a custom domain suffix, the default host name includes your custom domain suffix and is in the form *APP-NAME.internal.contoso.com*. On App Service Environment v3, the default host name always uses the default domain suffix and is in the form *APP-NAME.ASE-NAME.appserviceenvironment.net*. This difference is because App Service Environment v3 keeps the default domain suffix when you add a custom domain suffix. With App Service Environment v2, there's only a single domain suffix.
### Migrate to App Service Environment v3
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
Title: Migrate to App Service Environment v3 by using the side-by-side migration
description: Overview of the side-by-side migration feature for migration to App Service Environment v3. Previously updated : 3/28/2024 Last updated : 4/8/2024
The platform creates the [the new outbound IP addresses](networking.md#addresses
When completed, the new outbound IPs that your future App Service Environment v3 uses are created. These new IPs have no effect on your existing environment.
-You receive the new inbound IP address once migration is complete but before you make the [DNS change to redirect customer traffic to your new App Service Environment v3](#redirect-customer-traffic-and-complete-migration). You don't get the inbound IP at this point in the process because there are dependencies on App Service Environment v3 resources that get created during the migration step. You have a chance to update any resources that are dependent on the new inbound IP before you redirect traffic to your new App Service Environment v3.
+You receive the new inbound IP address once migration is complete but before you make the [DNS change to redirect customer traffic to your new App Service Environment v3](#redirect-customer-traffic-validate-your-app-service-environment-v3-and-complete-migration). You don't get the inbound IP at this point in the process because there are dependencies on App Service Environment v3 resources that get created during the migration step. You have a chance to update any resources that are dependent on the new inbound IP before you redirect traffic to your new App Service Environment v3.
This step is also where you decide if you want to enable zone redundancy for your new App Service Environment v3. Zone redundancy can be enabled as long as your App Service Environment v3 is [in a region that supports zone redundancy](./overview.md#regions).
Azure Policy can be used to deny resource creation and modification to certain p
If your existing App Service Environment uses a custom domain suffix, you must configure a custom domain suffix for your new App Service Environment v3. Custom domain suffix on App Service Environment v3 is implemented differently than on App Service Environment v2. You need to provide the custom domain name, managed identity, and certificate, which must be stored in Azure Key Vault. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). If your App Service Environment v2 has a custom domain suffix, you must configure a custom domain suffix for your new environment even if you no longer want to use it. Once migration is complete, you can remove the custom domain suffix configuration if needed.
+If your migration includes a custom domain suffix, for App Service Environment v3, the custom domain isn't displayed in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. Also, on App Service Environment v2, if you have a custom domain suffix, the default host name includes your custom domain suffix and is in the form *APP-NAME.internal.contoso.com*. On App Service Environment v3, the default host name always uses the default domain suffix and is in the form *APP-NAME.ASE-NAME.appserviceenvironment.net*. This difference is because App Service Environment v3 keeps the default domain suffix when you add a custom domain suffix. With App Service Environment v2, there's only a single domain suffix.
+ ### Migrate to App Service Environment v3 After completing the previous steps, you should continue with migration as soon as possible.
Side-by-side migration requires a three to six hour service window for App Servi
- The new App Service Environment v3 is created in the subnet you selected. - Your new App Service plans are created in the new App Service Environment v3 with the corresponding Isolated v2 tier. - Your apps are created in the new App Service Environment v3.-- The underlying compute for your apps is moved to the new App Service Environment v3. Your App Service Environment v2 front ends are still serving traffic. The migration process doesn't redirect to the App Service Environment v3 front ends until you complete the final step of the migration.
+- The underlying compute for your apps is moved to the new App Service Environment v3. Your App Service Environment v2 front ends are still serving traffic. Your old inbound IP address remains in use.
+ - For ILB App Service Environments, your App Service Environment v3 front ends aren't used until you update your private DNS zones with the new inbound IP address.
+ - For ELB App Service Environments, the migration process doesn't redirect to the App Service Environment v3 front ends until you complete the final step of the migration.
When this step completes, your application traffic is still going to your old App Service Environment front ends and the inbound IP that was assigned to it. However, you also now have an App Service Environment v3 with all of your apps. ### Get the inbound IP address for your new App Service Environment v3 and update dependent resources
-The new inbound IP address is given so that you can set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md) and update any of your private DNS zones. Don't move on to the next step until you account for these changes. There's downtime if you don't update dependent resources with the new inbound IP. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+The new inbound IP address is given so that you can set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md) and update any of your private DNS zones. Don't move on to the next step until you make these changes. There's downtime if you don't update dependent resources with the new inbound IP. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
-### Redirect customer traffic and complete migration
+### Redirect customer traffic, validate your App Service Environment v3, and complete migration
-The final step is to redirect traffic to your new App Service Environment v3 and complete the migration. The platform does this change for you, but only when you initiate it. Before you do this step, you should review your new App Service Environment v3 and perform any needed testing to validate that it's functioning as intended. Your App Service Environment v2 front ends are still running, but the backing compute is an App Service Environment v3. If you're able to access your apps without issues, that means you're ready to complete the migration.
+The final step is to redirect traffic to your new App Service Environment v3 and complete the migration. The platform does this change for you, but only when you initiate it. Before you do this step, you should review your new App Service Environment v3 and perform any needed testing to validate that it's functioning as intended. Your App Service Environment v2 front ends are still running, but the backing compute is an App Service Environment v3. If you're using an ILB App Service Environment v3, you can test your App Service Environment v3 front ends by updating your private DNS zones with the new inbound IP address. Testing this change allows you to fully validate your App Service Environment v3 before initiating the final step of the migration, where your old App Service Environment is deleted.
Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3 and the front ends that were created during the migration. Changes are effective within a couple minutes. If you run into issues, check your cache and TTL settings. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment.
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
This page is your one-stop shop for guidance and resources to help you upgrade s
|-||| |**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side-by-side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)| |**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side-by-side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
-|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side-by-side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
+|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side-by-side migration feature, you have the opportunity to [test and validate your App Service Environment v3](./side-by-side-migrate.md#redirect-customer-traffic-validate-your-app-service-environment-v3-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)| |**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
app-service Provision Resource Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-bicep.md
To deploy a different language stack, update `linuxFxVersion` with appropriate v
| **PHP** | linuxFxVersion="PHP&#124;7.4" | | **Node.js** | linuxFxVersion="NODE&#124;10.15" | | **Java** | linuxFxVersion="JAVA&#124;1.8 &#124;TOMCAT&#124;9.0" |
-| **Python** | linuxFxVersion="PYTHON&#124;3.7" |
+| **Python** | linuxFxVersion="PYTHON&#124;3.8" |
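Outside of Bicep, one hedged way to apply a value from this table is with the Azure CLI, assuming the app already exists:

```azurecli
az webapp config set --resource-group <group-name> --name <app-name> --linux-fx-version "PYTHON|3.8"
```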
application-gateway Ipv6 Application Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-portal.md
description: Learn how to configure Application Gateway with a frontend public I
Previously updated : 03/17/2024 Last updated : 04/04/2024
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md
Title: Authorize access to Azure App Configuration using Microsoft Entra ID
-description: Enable Azure RBAC to authorize access to your Azure App Configuration instance
+description: Enable Azure RBAC to authorize access to your Azure App Configuration instance.
Last updated 05/26/2020
# Authorize access to Azure App Configuration using Microsoft Entra ID
-Besides using Hash-based Message Authentication Code (HMAC), Azure App Configuration supports using Microsoft Entra ID to authorize requests to App Configuration instances. Microsoft Entra ID allows you to use Azure role-based access control (Azure RBAC) to grant permissions to a security principal. A security principal may be a user, a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) or an [application service principal](../active-directory/develop/app-objects-and-service-principals.md). To learn more about roles and role assignments, see [Understanding different roles](../role-based-access-control/overview.md).
+Besides using Hash-based Message Authentication Code (HMAC), Azure App Configuration supports using Microsoft Entra ID to authorize requests to App Configuration instances. Microsoft Entra ID allows you to use Azure role-based access control (Azure RBAC) to grant permissions to a security principal. A security principal may be a user, a [managed identity](../active-directory/managed-identities-azure-resources/overview.md), or an [application service principal](../active-directory/develop/app-objects-and-service-principals.md). To learn more about roles and role assignments, see [Understanding different roles](../role-based-access-control/overview.md).
## Overview Requests made by a security principal to access an App Configuration resource must be authorized. With Microsoft Entra ID, access to a resource is a two-step process:
-1. The security principal's identity is authenticated and an OAuth 2.0 token is returned. The resource name to request a token is `https://login.microsoftonline.com/{tenantID}` where `{tenantID}` matches the Microsoft Entra tenant ID to which the service principal belongs.
+1. The security principal's identity is authenticated and an OAuth 2.0 token is returned. The resource name to request a token is `https://login.microsoftonline.com/{tenantID}` where `{tenantID}` matches the Microsoft Entra tenant ID to which the service principal belongs.
2. The token is passed as part of a request to the App Configuration service to authorize access to the specified resource.
-The authentication step requires that an application request contains an OAuth 2.0 access token at runtime. If an application is running within an Azure entity, such as an Azure Functions app, an Azure Web App, or an Azure VM, it can use a managed identity to access the resources. To learn how to authenticate requests made by a managed identity to Azure App Configuration, see [Authenticate access to Azure App Configuration resources with Microsoft Entra ID and managed identities for Azure Resources](howto-integrate-azure-managed-service-identity.md).
+The authentication step requires that an application request contains an OAuth 2.0 access token at runtime. If an application is running within an Azure entity, such as an Azure Functions app, an Azure Web App, or an Azure VM, it can use a managed identity to access the resources. To learn how to authenticate requests made by a managed identity to Azure App Configuration, see [Authenticate access to Azure App Configuration resources with Microsoft Entra ID and managed identities for Azure Resources](howto-integrate-azure-managed-service-identity.md).
The authorization step requires that one or more Azure roles be assigned to the security principal. Azure App Configuration provides Azure roles that encompass sets of permissions for App Configuration resources. The roles that are assigned to a security principal determine the permissions provided to the principal. For more information about Azure roles, see [Azure built-in roles for Azure App Configuration](#azure-built-in-roles-for-azure-app-configuration).
When an Azure role is assigned to a Microsoft Entra security principal, Azure gr
## Azure built-in roles for Azure App Configuration Azure provides the following Azure built-in roles for authorizing access to App Configuration data using Microsoft Entra ID: -- **App Configuration Data Owner**: Use this role to give read/write/delete access to App Configuration data. This does not grant access to the App Configuration resource.-- **App Configuration Data Reader**: Use this role to give read access to App Configuration data. This does not grant access to the App Configuration resource.-- **Contributor** or **Owner**: Use this role to manage the App Configuration resource. It grants access to the resource's access keys. While the App Configuration data can be accessed using access keys, this role does not grant direct access to the data using Microsoft Entra ID. This role is required if you access the App Configuration data via ARM template, Bicep, or Terraform during deployment. For more information, see [authorization](quickstart-resource-manager.md#authorization).-- **Reader**: Use this role to give read access to the App Configuration resource. This does not grant access to the resource's access keys, nor to the data stored in App Configuration.
+- **App Configuration Data Owner**: Use this role to give read/write/delete access to App Configuration data. This role doesn't grant access to the App Configuration resource.
+- **App Configuration Data Reader**: Use this role to give read access to App Configuration data. This role doesn't grant access to the App Configuration resource.
+- **Contributor** or **Owner**: Use this role to manage the App Configuration resource. It grants access to the resource's access keys. While the App Configuration data can be accessed using access keys, this role doesn't grant direct access to the data using Microsoft Entra ID. This role is required if you access the App Configuration data via ARM template, Bicep, or Terraform during deployment. For more information, see [deployment](quickstart-deployment-overview.md).
+- **Reader**: Use this role to give read access to the App Configuration resource. This role doesn't grant access to the resource's access keys, nor to the data stored in App Configuration.
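As a hedged example, assigning one of these roles and then verifying data-plane access from the Azure CLI might look like the following; the principal ID and store name are placeholders:

```azurecli
# Grant read access to configuration data at the store scope
az role assignment create --assignee <principal-object-id> --role "App Configuration Data Reader" --scope <app-configuration-resource-id>

# Verify data access by listing key-values with Microsoft Entra ID authentication
az appconfig kv list --name <store-name> --auth-mode login
```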
> [!NOTE] > After a role assignment is made for an identity, allow up to 15 minutes for the permission to propagate before accessing data stored in App Configuration using this identity.
azure-app-configuration Howto Disable Access Key Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-access-key-authentication.md
Title: Disable access key authentication for an Azure App Configuration instance
-description: Learn how to disable access key authentication for an Azure App Configuration instance
+description: Learn how to disable access key authentication for an Azure App Configuration instance.
When you disable access key authentication for an Azure App Configuration resour
## Disable access key authentication
-Disabling access key authentication will delete all access keys. If any running applications are using access keys for authentication they will begin to fail once access key authentication is disabled. Enabling access key authentication again will generate a new set of access keys and any applications attempting to use the old access keys will still fail.
+Disabling access key authentication will delete all access keys. If any running applications are using access keys for authentication, they will begin to fail once access key authentication is disabled. Enabling access key authentication again will generate a new set of access keys and any applications attempting to use the old access keys will still fail.
> [!WARNING] > If any clients are currently accessing data in your Azure App Configuration resource with access keys, then Microsoft recommends that you migrate those clients to [Microsoft Entra ID](./concept-enable-rbac.md) before disabling access key authentication.
-> Additionally, it is recommended to read the [limitations](#limitations) section below to verify the limitations won't affect the intended usage of the resource.
# [Azure portal](#tab/portal) To disallow access key authentication for an Azure App Configuration resource in the Azure portal, follow these steps: 1. Navigate to your Azure App Configuration resource in the Azure portal.
-2. Locate the **Access keys** setting under **Settings**.
+2. Locate the **Access settings** setting under **Settings**.
- :::image type="content" border="true" source="./media/access-keys-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade":::
+ :::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
3. Set the **Enable access keys** toggle to **Disabled**.
The capability to disable access key authentication using the Azure CLI is in de
### Verify that access key authentication is disabled
-To verify that access key authentication is no longer permitted, a request can be made to list the access keys for the Azure App Configuration resource. If access key authentication is disabled there will be no access keys and the list operation will return an empty list.
+To verify that access key authentication is no longer permitted, a request can be made to list the access keys for the Azure App Configuration resource. If access key authentication is disabled, there will be no access keys, and the list operation will return an empty list.
# [Azure portal](#tab/portal) To verify access key authentication is disabled for an Azure App Configuration resource in the Azure portal, follow these steps: 1. Navigate to your Azure App Configuration resource in the Azure portal.
-2. Locate the **Access keys** setting under **Settings**.
+2. Locate the **Access settings** setting under **Settings**.
- :::image type="content" border="true" source="./media/access-keys-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade":::
+ :::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
3. Verify there are no access keys displayed and **Enable access keys** is toggled to **Disabled**.
az appconfig credential list \
--resource-group <resource-group> ```
-If access key authentication is disabled then an empty list will be returned.
+If access key authentication is disabled, then an empty list will be returned.
``` C:\Users\User>az appconfig credential list -g <resource-group> -n <app-configuration-name>
These roles do not provide access to data in an Azure App Configuration resource
Role assignments must be scoped to the level of the Azure App Configuration resource or higher to permit a user to allow or disallow access key authentication for the resource. For more information about role scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).
-Be careful to restrict assignment of these roles only to those who require the ability to create an App Configuration resource or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
+Be careful to restrict assignment of these roles only to those users who require the ability to create an App Configuration resource or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
> [!NOTE] > The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage App Configuration resources. For more information, see [Azure roles, Microsoft Entra roles, and classic subscription administrator roles](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
-## Limitations
-
-The capability to disable access key authentication has the following limitation:
-
-### ARM template access
-
-When access key authentication is disabled, the capability to read/write key-values in an [ARM template](./quickstart-resource-manager.md) will be disabled as well. This is because access to the Microsoft.AppConfiguration/configurationStores/keyValues resource used in ARM templates requires an Azure Resource Manager role, such as contributor or owner. When access key authentication is disabled, access to the resource requires one of the Azure App Configuration [data plane roles](concept-enable-rbac.md), therefore ARM template access is rejected.
+> [!NOTE]
+> When access key authentication is disabled and [ARM authentication mode](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode) of App Configuration store is local, the capability to read/write key-values in an [ARM template](./quickstart-resource-manager.md) will be disabled as well. This is because access to the Microsoft.AppConfiguration/configurationStores/keyValues resource used in ARM templates requires access key authentication with local ARM authentication mode. It's recommended to use pass-through ARM authentication mode. For more information, see [Deployment overview](./quickstart-deployment-overview.md).
## Next steps
azure-app-configuration Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md
This quickstart describes how you can use Bicep to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Authorization
+
+Managing an Azure App Configuration resource with a Bicep file requires an Azure Resource Manager role, such as contributor or owner. Accessing Azure App Configuration data (key-values, snapshots) requires an Azure Resource Manager role and an additional Azure App Configuration [data plane role](concept-enable-rbac.md) when the configuration store's ARM authentication mode is set to [pass-through](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode).
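In practice, this means the identity that runs the deployment needs both a management role and a data plane role when pass-through mode is enabled. The following sketch illustrates this; the file name, principal ID, and scope are placeholders:

```azurecli
# Grant the deployment identity data-plane access to the store (in addition to its management role)
az role assignment create --assignee <principal-object-id> --role "App Configuration Data Owner" --scope <app-configuration-resource-id>

# Run the Bicep deployment
az deployment group create --resource-group <resource-group> --template-file main.bicep
```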
+ ## Review the Bicep file The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/app-configuration-store-kv/).
azure-app-configuration Quickstart Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-deployment-overview.md
+
+ Title: Deployment overview
+
+description: Learn how to use Azure App Configuration in deployment.
++ Last updated : 03/15/2024+++++
+# Deployment
+
+Azure App Configuration supports the following methods to read and manage your configuration during deployment:
+
+- [ARM template](./quickstart-resource-manager.md)
+- [Bicep](./quickstart-bicep.md)
+- Terraform
+
+## Manage Azure App Configuration resources in deployment
+
+### Azure Resource Manager Authorization
+
+You must have Azure Resource Manager permissions to manage Azure App Configuration resources. Azure role-based access control (Azure RBAC) roles that provide these permissions include those that contain the Microsoft.AppConfiguration/configurationStores/write or Microsoft.AppConfiguration/configurationStores/* action. Built-in roles with this action include:
+
+- Owner
+- Contributor
+
+To learn more about Azure RBAC and Microsoft Entra ID, see [Authorize access to Azure App Configuration using Microsoft Entra ID](./concept-enable-rbac.md).
+
+## Manage Azure App Configuration data in deployment
+
+Azure App Configuration data, such as key-values and snapshots, can be managed in deployment. When managing App Configuration data using this method, it's recommended to set your configuration store's Azure Resource Manager authentication mode to **Pass-through**. This authentication mode ensures that data access requires a combination of data plane and Azure Resource Manager management roles, and that data access can be properly attributed to the deployment caller for auditing purposes.
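+For reference, newer Azure CLI versions also expose this setting. Treat the parameter name and value below as assumptions and use the portal steps that follow if they aren't available in your CLI version:
+
+```azurecli
+az appconfig update --name <store-name> --resource-group <resource-group> --arm-auth-mode pass-through
+```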
+
+### Azure Resource Manager authentication mode
+
+# [Azure portal](#tab/portal)
+
+To configure the Azure Resource Manager authentication mode of an Azure App Configuration resource in the Azure portal, follow these steps:
+
+1. Navigate to your Azure App Configuration resource in the Azure portal
+2. Locate the **Access settings** setting under **Settings**
+
+ :::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access settings blade.":::
+
+3. Select the recommended **Pass-through** authentication mode under **Azure Resource Manager Authentication Mode**
+
+ :::image type="content" border="true" source="./media/quickstarts/deployment/select-passthrough-authentication-mode.png" alt-text="Screenshot showing pass-through authentication mode being selected under Azure Resource Manager Authentication Mode.":::
+++
+> [!NOTE]
+> Local authentication mode is for backward compatibility and has several limitations. It does not support proper auditing for accessing data in deployment. Under local authentication mode, key-value data access inside an ARM template/Bicep/Terraform is disabled if [access key authentication is disabled](./howto-disable-access-key-authentication.md). Azure App Configuration data plane permissions are not required for accessing data under local authentication mode.
+
+### Azure App Configuration Authorization
+
+When your App Configuration resource has its Azure Resource Manager authentication mode set to **Pass-through**, you must have Azure App Configuration data plane permissions to read and manage Azure App Configuration data in deployment. This requirement is in addition to the baseline management permission requirements of the resource. Azure App Configuration data plane permissions include Microsoft.AppConfiguration/configurationStores/\*/read and Microsoft.AppConfiguration/configurationStores/\*/write. Built-in roles with these actions include:
+
+- App Configuration Data Owner
+- App Configuration Data Reader
+
+To learn more about Azure RBAC and Microsoft Entra ID, see [Authorize access to Azure App Configuration using Microsoft Entra ID](./concept-enable-rbac.md).
+
+### Private network access
+
+When an App Configuration resource is restricted to private network access, deployments accessing App Configuration data through public networks will be blocked. To enable successful deployments when access to an App Configuration resource is restricted to private networks, the following actions must be taken:
+
+- [Azure Resource Manager Private Link](../azure-resource-manager/management/create-private-link-access-portal.md) must be set up
+- The App Configuration resource must have Azure Resource Manager authentication mode set to **Pass-through**
+- The App Configuration resource must have Azure Resource Manager private network access enabled
+- Deployments accessing App Configuration data must run through the configured Azure Resource Manager private link
+
+If all of these criteria are met, then deployments accessing App Configuration data will be successful.
+
+# [Azure portal](#tab/portal)
+
+To enable Azure Resource Manager private network access for an Azure App Configuration resource in the Azure portal, follow these steps:
+
+1. Navigate to your Azure App Configuration resource in the Azure portal
+2. Locate the **Networking** setting under **Settings**
+
+ :::image type="content" border="true" source="./media/networking-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources networking blade.":::
+
+3. Check **Enable Azure Resource Manager Private Access** under **Private Access**
+
+ :::image type="content" border="true" source="./media/quickstarts/deployment/enable-azure-resource-manager-private-access.png" alt-text="Screenshot showing Enable Azure Resource Manager Private Access is checked.":::
+
+> [!NOTE]
+> Azure Resource Manager private network access can only be enabled under **Pass-through** authentication mode.
+++
+## Next steps
+
+To learn about deployment using ARM template and Bicep, check the documentations linked below.
+
+- [Quickstart: Create an Azure App Configuration store by using an ARM template](./quickstart-resource-manager.md)
+- [Quickstart: Create an Azure App Configuration store using Bicep](./quickstart-bicep.md)
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md
# Quickstart: Create an Azure App Configuration store by using an ARM template
-This quickstart describes how to :
+This quickstart describes how to:
- Deploy an App Configuration store using an Azure Resource Manager template (ARM template). - Create key-values in an App Configuration store using ARM template.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Authorization
-Accessing key-value data inside an ARM template requires an Azure Resource Manager role, such as contributor or owner. Access via one of the Azure App Configuration [data plane roles](concept-enable-rbac.md) currently is not supported.
-
-> [!NOTE]
-> Key-value data access inside an ARM template is disabled if access key authentication is disabled. For more information, see [disable access key authentication](./howto-disable-access-key-authentication.md#limitations).
+Managing an Azure App Configuration resource inside an ARM template requires an Azure Resource Manager role, such as contributor or owner. Accessing Azure App Configuration data (key-values, snapshots) requires an Azure Resource Manager role and an Azure App Configuration [data plane role](concept-enable-rbac.md) under the [pass-through](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode) ARM authentication mode.
## Review the template
azure-arc Choose Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/choose-service.md
+
+ Title: Choosing the right Azure Arc service for machines
+description: Learn about the different services offered by Azure Arc and how to choose the right one for your machines.
Last updated : 04/08/2024+++
+# Choosing the right Azure Arc service for machines
+
+Azure Arc offers different services based on your existing IT infrastructure and management needs. Before onboarding your resources to Azure Arc-enabled servers, you should investigate the different Azure Arc offerings to determine which best suits your requirements. Choosing the right Azure Arc service provides the best possible inventorying and management of your resources.
+
+There are several different ways you can connect your existing Windows and Linux machines to Azure Arc:
+
+- Azure Arc-enabled servers
+- Azure Arc-enabled VMware vSphere
+- Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
+- Azure Arc-enabled Azure Stack HCI
+
+Each of these services extends the Azure control plane to your existing infrastructure and enables the use of [Azure security, governance, and management capabilities using the Connected Machine agent](/azure/azure-arc/servers/overview). Other services besides Azure Arc-enabled servers also use an [Azure Arc resource bridge](/azure/azure-arc/resource-bridge/overview), a part of the core Azure Arc platform that provides self-servicing and additional management capabilities.
+
+General recommendations about the right service to use are as follows:
+
+|If your machine is a... |...connect to Azure with... |
+|||
+|VMware VM (not running on AVS) |[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) |
+|Azure VMware Solution (AVS) VM |[Azure Arc-enabled VMware vSphere for Azure VMware Solution](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) |
+|VM managed by System Center Virtual Machine Manager |[Azure Arc-enabled SCVMM](system-center-virtual-machine-manager/overview.md) |
+|Azure Stack HCI VM |[Arc-enabled Azure Stack HCI](/azure-stack/hci/overview) |
+|Physical server |[Azure Arc-enabled servers](servers/overview.md) |
+|VM on another hypervisor |[Azure Arc-enabled servers](servers/overview.md) |
+|VM on another cloud provider |[Azure Arc-enabled servers](servers/overview.md) |
+
+If you're unsure which of these services to use, start with Azure Arc-enabled servers and add a resource bridge for additional management capabilities later. Azure Arc-enabled servers lets you connect all of the VM types supported by the other services and provides a wide range of capabilities such as Azure Policy and monitoring, while adding a resource bridge later extends those capabilities.
+
+Region availability also varies between Azure Arc services, so you may need to use Azure Arc-enabled servers if a more specialized version of Azure Arc is unavailable in your preferred region. See [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all&rar=true) to learn more about region availability for Azure Arc services.
+
+Where your machine runs determines the best Azure Arc service to use. Organizations with diverse infrastructure may end up using more than one Azure Arc service; that's fine. The core set of features remains the same no matter which Azure Arc service you use.
+
+## Azure Arc-enabled servers
+
+[Azure Arc-enabled servers](servers/overview.md) lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or other cloud provider. When connecting your machine to Azure Arc-enabled servers, you can perform various operational functions similar to native Azure virtual machines.
+
+### Capabilities
+
+- Govern: Assign Azure Automanage machine configurations to audit settings within the machine. Use the Azure Policy pricing guide to understand costs.
+
+- Protect: Safeguard non-Azure servers with Microsoft Defender for Endpoint, integrated through Microsoft Defender for Cloud. This includes threat detection, vulnerability management, and proactive security monitoring. Utilize Microsoft Sentinel for collecting security events and correlating them with other data sources.
+
+- Configure: Employ Azure Automation for managing tasks using PowerShell and Python runbooks. Use Change Tracking and Inventory for assessing configuration changes. Utilize Update Management for handling OS updates. Perform post-deployment configuration and automation tasks using supported Azure Arc-enabled servers VM extensions.
+
+- Monitor: Utilize VM insights for monitoring OS performance and discovering application components. Collect log data, such as performance data and events, through the Log Analytics agent, storing it in a Log Analytics workspace.
+
+- Procure Extended Security Updates (ESUs) at scale for your Windows Server 2012 and 2012 R2 machines running in a vCenter-managed estate.
+
+> [!IMPORTANT]
+> Azure Arc-enabled VMware vSphere and Azure Arc-enabled SCVMM have all the capabilities of Azure Arc-enabled servers, but also provide specific, additional capabilities.
+>
+## Azure Arc-enabled VMware vSphere
+
+[Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) simplifies the management of hybrid IT resources distributed across VMware vSphere and Azure.
+
+Running software in Azure VMware Solution, as a private cloud in Azure, offers some benefits not realized by operating your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, running in Azure VMware Solution provides additional value such as free Extended Security Updates (ESUs).
+
+To take advantage of these benefits when you're running in Azure VMware Solution, it's important to follow the respective [onboarding](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) process to fully integrate the experience with the AVS private cloud.
+
+Additionally, if a VM in an Azure VMware Solution private cloud was Azure Arc-enabled using a method other than the one outlined in the AVS public documentation, follow the steps in that [document](/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows) to refresh the integration between the Azure Arc-enabled VMs and Azure VMware Solution.
+
+### Capabilities
+
+- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Azure Arc at scale.
+
+- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, and delete, as well as power cycle operations (start, stop, restart) on VMware VMs, consistent with the Azure experience.
+
+- Empower developers and application teams to self-serve VM operations on-demand using Azure role-based access control (RBAC).
+
+- Install the Azure Arc-connected machine agent at scale on VMware VMs to govern, protect, configure, and monitor them.
+
+- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
+
+## Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
+
+[Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md) (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal.
+
+Azure Arc-enabled System Center Virtual Machine Manager also allows you to manage your hybrid environment consistently and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations.
+
+### Capabilities
+
+- Discover and onboard existing SCVMM managed VMs to Azure.
+
+- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.
+
+- Empower developers and application teams to self-serve VM operations on demand using Azure role-based access control (RBAC).
+
+- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
+
+- Install the Azure Arc-connected machine agents at scale on SCVMM VMs to govern, protect, configure, and monitor them.
+
+## Azure Stack HCI
+
+[Azure Stack HCI](/azure-stack/hci/overview) is a hyperconverged infrastructure operating system delivered as an Azure service. It's a hybrid solution designed to host virtualized Windows and Linux VMs or containerized workloads and their storage. Azure Stack HCI is offered on validated hardware and connects on-premises estates to Azure, enabling cloud-based services, monitoring, and management. This helps customers manage their infrastructure from Azure and run virtualized workloads on-premises, making it easier to consolidate aging infrastructure and connect to Azure.
+
+> [!NOTE]
+> Azure Stack HCI comes with the Azure Arc resource bridge installed and uses the Azure Arc control plane for infrastructure and workload management, allowing you to monitor, update, and secure your HCI infrastructure from the Azure portal.
+>
+
+### Capabilities
+
+- Deploy and manage workloads, including VMs and Kubernetes clusters, from Azure through the Azure Arc resource bridge.
+
+- Manage VM lifecycle operations such as start, stop, and delete from the Azure control plane.
+
+- Manage Kubernetes lifecycle operations such as scale, update, upgrade, and delete clusters from the Azure control plane.
+
+- Install the Azure Connected Machine agent and the Azure Arc-enabled Kubernetes agent on your VMs and Kubernetes clusters to use Azure services such as Azure Monitor and Microsoft Defender for Cloud.
+
+- Use Azure Virtual Desktop for Azure Stack HCI to deploy session hosts onto your on-premises infrastructure to better meet your performance or data locality requirements.
+
+- Empower developers and application teams to self-serve VM and Kubernetes cluster operations on demand using Azure role-based access control (RBAC).
+
+- Monitor, update, and secure your Azure Stack HCI infrastructure and workloads across fleets of locations directly from the Azure portal.
+
+- Deploy and manage static and DHCP-based logical networks on-premises to host your workloads.
+
+- Manage VM images through Azure Marketplace integration, with the ability to bring your own images from an Azure storage account or cluster shared volumes.
+
+- Create and manage storage paths to store your VM disks and config files.
+
+## Capabilities at a glance
+
+The following table provides a quick way to see the major capabilities of the Azure Arc services that connect your existing Windows and Linux machines to Azure Arc.
+
+| |Arc-enabled servers |Arc-enabled VMware vSphere |Arc-enabled SCVMM |Arc-enabled Azure Stack HCI |SQL Server enabled by Azure Arc |
+|---|---|---|---|---|---|
+|Microsoft Defender for Cloud |✓ |✓ |✓ |✓ |✓ |
+|Microsoft Sentinel |✓ |✓ |✓ |✓ |✓ |
+|Azure Automation |✓ |✓ |✓ |✓ |✓ |
+|Azure Update Manager |✓ |✓ |✓ |✓ |✓ |
+|VM extensions |✓ |✓ |✓ |✓ |✓ |
+|Azure Monitor |✓ |✓ |✓ |✓ |✓ |
+|Extended Security Updates for Windows Server 2012/2012 R2 |✓ |✓ |✓ |✓ |✓ |
+|Discover & onboard VMs to Azure | |✓ |✓ |✓ |✓ |
+|Lifecycle operations (start/stop VMs, etc.) | |✓ |✓ |✓ |✓ |
+|Self-serve VM provisioning | |✓ |✓ |✓ |✓ |
+
+## Switching from Arc-enabled servers to another service
+
+If you currently use Azure Arc-enabled servers, you can get the additional capabilities that come with Arc-enabled VMware vSphere or Arc-enabled SCVMM:
+
+- [Enable virtual hardware and VM CRUD capabilities in a machine with Azure Arc agent installed](/azure/azure-arc/vmware-vsphere/enable-virtual-hardware)
+
+- [Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Azure Arc agent installed](/azure/azure-arc/system-center-virtual-machine-manager/enable-virtual-hardware-scvmm)
+
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This article provides information on troubleshooting and resolving issues that c
### Logs collection
-For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same management machine that was used to run commands to deploy the Arc resource bridge. If you are using a different machine to collect logs, you need to run the `az arcappliance get-credentials` command first before collecting logs.
+For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same management machine that was used to run commands to deploy the Arc resource bridge. If you're using a different machine to collect logs, you need to run the `az arcappliance get-credentials` command first before collecting logs.
If there's a problem collecting logs, most likely the management machine is unable to reach the Appliance VM. Contact your network administrator to allow SSH communication from the management machine to the Appliance VM on TCP port 22.
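As a quick check before retrying log collection, you can verify that TCP port 22 is reachable from the management machine. This is a sketch; replace the placeholder IP with your appliance VM IP.

```powershell
# Verify SSH (TCP 22) reachability from the management machine to the appliance VM.
# 192.0.2.10 is a placeholder appliance VM IP address.
Test-NetConnection -ComputerName 192.0.2.10 -Port 22
```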
To collect Arc resource bridge logs for Azure Stack HCI using the appliance VM I
az arcappliance logs hci --ip <appliance VM IP> --cloudagent <cloud agent service IP/FQDN> --loginconfigfile <file path of kvatoken.tok> ```
-If you are unsure of your appliance VM IP, there is also the option to use the kubeconfig. You can retrieve the kubeconfig by running the [get-credentials command](/cli/azure/arcappliance) then run the logs command.
+If you're unsure of your appliance VM IP, there's also the option to use the kubeconfig. You can retrieve the kubeconfig by running the [get-credentials command](/cli/azure/arcappliance) then run the logs command.
To retrieve the kubeconfig and log key then collect logs for Arc-enabled VMware from a different machine than the one used to deploy Arc resource bridge for Arc-enabled VMware:
az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified
### Arc resource bridge is offline
-If the resource bridge is offline, this is typically due to a networking change in the infrastructure, environment or cluster that stops the appliance VM from being able to communicate with its counterpart Azure resource. If you are unable to determine what changed, you can reboot the appliance VM, collect logs and submit a support ticket for further investigation.
+If the resource bridge is offline, this is typically due to a networking change in the infrastructure, environment or cluster that stops the appliance VM from being able to communicate with its counterpart Azure resource. If you're unable to determine what changed, you can reboot the appliance VM, collect logs and submit a support ticket for further investigation.
### Remote PowerShell isn't supported
To resolve this problem, delete the resource bridge, register the providers, the
Arc resource bridge consists of an appliance VM that is deployed to the on-premises infrastructure. The appliance VM maintains a connection to the management endpoint of the on-premises infrastructure using locally stored credentials. If these credentials aren't updated, the resource bridge is no longer able to communicate with the management endpoint. This can cause problems when trying to upgrade the resource bridge or manage VMs through Azure. To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm).
-### Private Link is unsupported
+### Private link is unsupported
Arc resource bridge doesn't support private link. All calls coming from the appliance VM shouldn't be going through your private link setup. The Private Link IPs may conflict with the appliance IP pool range, which isn't configurable on the resource bridge. Arc resource bridge reaches out to [required URLs](network-requirements.md#firewallproxy-url-allowlist) that shouldn't go through a private link connection. You must deploy Arc resource bridge on a separate network segment unrelated to the private link setup.
To resolve this issue, reboot the resource bridge VM, and it should recover its
Be sure that the proxy server on your management machine trusts both the SSL certificate for your SSL proxy and the SSL certificate of the Microsoft download servers. For more information, see [SSL proxy configuration](network-requirements.md#ssl-proxy-configuration).
+### No such host - dp.kubernetesconfiguration.azure.com
+
+An error that contains `dial tcp: lookup westeurope.dp.kubernetesconfiguration.azure.com: no such host` while deploying Arc resource bridge means that the configuration dataplane is currently unavailable in the specified region. The service may be temporarily unavailable. Wait for the service to become available, and then retry the deployment.
+
+### No such host for Arc resource bridge required URL
+
+An error that contains an Arc resource bridge required URL with the message `no such host` indicates that DNS is not able to resolve the URL. The error may look similar to the example below, where the required URL is `https://msk8s.api.cdp.microsoft.com`:
+
+`Error: { _errorCode_: _InvalidEntityError_, _errorResponse_: _{\n\_message\_: \_Post \\\_https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select\\\_: POST https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select giving up after 6 attempt(s): Post \\\_https://msk8s.api.cdp.microsoft.com/api/v1.1/contents/default/namespaces/default/names/arc-appliance-stable-catalogs-ext/versions/latest?action=select\\\_: proxyconnect tcp: dial tcp: lookup http: no such host\_\n}_ }`
+
+This error can occur if the DNS settings provided during deployment aren't correct or there's a problem with the DNS server(s). You can check whether your DNS server is able to resolve the URL by running the following command from the management machine or a machine that has access to the DNS server(s):
+
+```
+nslookup
+> set debug
+> <hostname> <DNS server IP>
+```
+
+In order to resolve the error, your DNS server(s) must be configured to resolve all Arc resource bridge required URLs and the DNS server(s) should be correctly provided during deployment of Arc resource bridge.
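On a Windows management machine, you can run the same check with PowerShell. This is a sketch; the host name is one of the required URLs and the DNS server IP is a placeholder, so substitute your own values.

```powershell
# Check that the DNS server used for deployment can resolve an Arc resource bridge required URL.
Resolve-DnsName -Name "msk8s.api.cdp.microsoft.com" -Server 10.0.0.53
```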
+ ### KVA timeout error
-While trying to deploy Arc Resource Bridge, a "KVA timeout error" might appear. The "KVA timeout error" is a generic error that can be the result of a variety of network misconfigurations that involve the management machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
+The KVA timeout error is a generic error that can result from a variety of network misconfigurations in which the management machine, Appliance VM, or Control Plane IP can't communicate with each other, with the internet, or with required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
-For clarity, "management machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
+For clarity, management machine refers to the machine where deployment CLI commands are being run. Appliance VM is the VM that hosts Arc resource bridge. Control Plane IP is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
#### Top causes of the KVA timeout error
To resolve the error, one or more network misconfigurations might need to be add
Once logs are collected, extract the folder and open kva.log. Review the kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error.
-1. The management machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the management machine and verify there is a response from both IPs.
+1. The management machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the management machine and verify there's a response from both IPs.
If a request times out, the management machine can't communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the management machine to the Control Plane IP and Appliance VM IP.
To resolve the error, one or more network misconfigurations might need to be add
1. Appliance VM needs to be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to resolve external/internal addresses, such as Azure service addresses and container registry names for download of the Arc resource bridge container images from the cloud. Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.-
+
## Move Arc resource bridge location Resource move of Arc resource bridge isn't currently supported. You'll need to delete the Arc resource bridge, then re-deploy it to the desired location.
To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappli
## Azure Arc-enabled VMware VCenter issues
-### `az arcappliance prepare` failure
+### vSphere SDK client 403 Forbidden or 404 not found
-The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+If you receive an error that contains `errorCode_: _CreateConfigKvaCustomerError_, _errorResponse_: _error getting the vsphere sdk client: POST \_/sdk\_: 403 Forbidden` or `404 not found` while deploying Arc resource bridge, this is most likely due to an incorrect vCenter URL being provided during configuration file creation where you're prompted to enter the vCenter address as either FQDN or IP address. There are different ways to find your vCenter address. One option is to access the vSphere client via its web interface. The vCenter FQDN or IP address is typically what you use in the browser to access the vSphere client. If you're already logged in, you can look at the browser's address bar; the URL you use to access vSphere is your vCenter server's FQDN or IP address. Alternatively, after logging in, go to the Menu > Administration section. Under System Configuration, choose Nodes. Your vCenter server instance(s) will be listed there along with its FQDN. Verify your vCenter address and then re-try the deployment.
-```azurecli
-$ az arcappliance prepare vmware --config-file <path to config>
+### Pre-deployment validation errors
-Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing "3670409216":
-value out of range.
-```
+If you're receiving a variety of `pre-deployment validation of your download\upload connectivity wasn't successful` errors, such as:
+
+`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: Service Unavailable`
+
+`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: dial tcp 172.16.60.10:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.`
+
+`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: use of closed network connection.`
+
+`Pre-deployment validation of your download/upload connectivity wasn't successful. {\\n \\\_code\\\_: \\\_ImageProvisionError\\\_,\\n \\\_message\\\_: \\\_Post \\\\\\\_https://vcenter-server.com/nfc/unique-identifier/disk-0.vmdk\\\\\\\_: dial tcp: lookup hostname.domain: no such host`
+
+A combination of these errors usually indicates that the management machine has lost its connection to the datastore, or that a networking issue is making the datastore unreachable. This connection is needed to upload the OVA from the management machine, which is used to build the appliance VM in vCenter. Reestablish the connection between the management machine and the datastore, and then retry deploying Arc resource bridge.
+
+### x509 certificate has expired or isn't yet valid
+
+When you deploy Arc resource bridge, you may encounter the error:
+
+`Error: { _errorCode_: _PostOperationsError_, _errorResponse_: _{\n\_message\_: \_{\\n \\\_code\\\_: \\\_GuestInternetConnectivityError\\\_,\\n \\\_message\\\_: \\\_Not able to connect to https://msk8s.api.cdp.microsoft.com. Error returned: action failed after 3 attempts: Get \\\\\\\_https://msk8s.api.cdp.microsoft.com\\\\\\\_: x509: certificate has expired or isn't yet valid: current time 2022-01-18T11:35:56Z is before 2023-09-07T19:13:21Z. Arc Resource Bridge network and internet connectivity validation failed: http-connectivity-test-arc. 1. Please check your networking setup and ensure the URLs mentioned in : https://aka.ms/AAla73m are reachable from the Appliance VM. 2. Check firewall/proxy settings`
-This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. To resolve the error, install and use Azure CLI 64-bit.
+This error occurs when there's a clock/time difference between the ESXi host(s) and the management machine where the deployment commands for Arc resource bridge are being executed. To resolve this issue, turn on NTP time sync on the ESXi host(s), confirm that the management machine is also synced to NTP, and then try the deployment again.
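To confirm that the management machine's clock is in sync, you can check the Windows time service status. This is a quick sketch for the management machine only; NTP settings for the ESXi host(s) are configured separately in vSphere.

```powershell
# Show the Windows time service synchronization status on the management machine.
w32tm /query /status

# Display the current UTC time reported by the OS for comparison.
(Get-Date).ToUniversalTime()
```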
### Error during host configuration
-When you deploy the resource bridge on VMware vCenter, if you have been using the same template to deploy and delete the appliance multiple times, you might encounter the following error:
+If you have been using the same template to deploy and delete the Arc resource bridge multiple times, you might encounter the following error:
-`Appliance cluster deployment failed with error:
-Error: An error occurred during host configuration`
+`Appliance cluster deployment failed with error: Error: An error occurred during host configuration`
-To resolve this issue, delete the existing template manually. Then run [`az arcappliance prepare`](/cli/azure/arcappliance/prepare) to download a new template for deployment.
+To resolve this issue, manually delete the existing template. Then run [`az arcappliance prepare`](/cli/azure/arcappliance/prepare) to download a new template for deployment.
### Unable to find folders
-When deploying the resource bridge on VMware vCenter, you specify the folder in which the template and VM will be created. The folder must be VM and template folder type. Other types of folder, such as storage folders, network folders, or host and cluster folders, can't be used by the resource bridge deployment.
+When deploying the resource bridge on VMware vCenter, you specify the folder in which the template and VM will be created. The folder must be of the VM and template folder type. Other folder types, such as storage folders, network folders, or host and cluster folders, can't be used for the resource bridge deployment.
### Insufficient permissions
azure-arc Onboard Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-server.md
Title: Connect Windows Server machines to Azure through Azure Arc Setup description: In this article, you learn how to connect Windows Server machines to Azure Arc using the built-in Windows Server Azure Arc Setup wizard. Previously updated : 10/12/2023 Last updated : 04/05/2024
Windows Server machines can be onboarded directly to [Azure Arc](https://azure.m
Onboarding to Azure Arc is not needed if the Windows Server machine is already running in Azure.
+For Windows Server 2022, Azure Arc Setup is an optional component that can be removed using the **Remove Roles and Features Wizard**. For Windows Server 2025 and later, Azure Arc Setup is delivered as a [Feature on Demand](/windows-hardware/manufacture/desktop/features-on-demand-v2--capabilities?view=windows-11), which means that the procedures for removal and enablement differ between OS versions.
+ > [!NOTE]
-> This feature only applies to Windows Server 2022 and later. It was released in the [Cumulative Update of 10/10/2023](https://support.microsoft.com/en-us/topic/october-10-2023-kb5031364-os-build-20348-2031-7f1d69e7-c468-4566-887a-1902af791bbc).
+> The Azure Arc Setup feature only applies to Windows Server 2022 and later. It was released in the [Cumulative Update of 10/10/2023](https://support.microsoft.com/en-us/topic/october-10-2023-kb5031364-os-build-20348-2031-7f1d69e7-c468-4566-887a-1902af791bbc).
> ## Prerequisites
The Azure Arc system tray icon at the bottom of your Windows Server machine indi
## Uninstalling Azure Arc Setup
-To uninstall Azure Arc Setup, follow these steps:
+> [!NOTE]
+> Uninstalling Azure Arc Setup does not uninstall the Azure Connected Machine agent from the machine. For instructions on uninstalling the agent, see [Managing and maintaining the Connected Machine agent](manage-agent.md).
+>
+To uninstall Azure Arc Setup from a Windows Server 2022 machine:
-1. In the Server Manager, navigate to the **Remove Roles and Features Wizard**. (See [Remove roles, role services, and features by using the remove Roles and Features Wizard](/windows-server/administration/server-manager/install-or-uninstall-roles-role-services-or-features#remove-roles-role-services-and-features-by-using-the-remove-roles-and-features-wizard) for more information.)
+1. In the Server Manager, navigate to the **Remove Roles and Features Wizard**. (See [Remove roles, role services, and features by using the Remove Roles and Features Wizard](/windows-server/administration/server-manager/install-or-uninstall-roles-role-services-or-features#remove-roles-role-services-and-features-by-using-the-remove-roles-and-features-wizard) for more information.)
1. On the Features page, uncheck the box for **Azure Arc Setup**.
To uninstall Azure Arc Setup through PowerShell, run the following command:
Disable-WindowsOptionalFeature -Online -FeatureName AzureArcSetup ```
-> [!NOTE]
-> Uninstalling Azure Arc Setup does not uninstall the Azure Connected Machine agent from the machine. For instructions on uninstalling the agent, see [Managing and maintaining the Connected Machine agent](manage-agent.md).
->
+To uninstall Azure Arc Setup from a Windows Server 2025 machine:
+
+1. Open the Settings app on the machine and select **System**, then select **Optional features**.
+
+1. Select **AzureArcSetup**, and then select **Remove**.
++
+To uninstall Azure Arc Setup from the command line on a Windows Server 2025 machine, run the following command:
+
+`DISM /online /Remove-Capability /CapabilityName:AzureArcSetup~~~~`
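If you want to confirm whether the capability is present before removing it, you can check its state with the built-in Windows PowerShell cmdlets. This is a sketch equivalent to the DISM command above.

```powershell
# List the Azure Arc Setup capability and its install state on Windows Server 2025.
Get-WindowsCapability -Online -Name "AzureArcSetup*"

# Remove the capability (equivalent to the DISM command shown earlier).
Remove-WindowsCapability -Online -Name "AzureArcSetup~~~~"
```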
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
You can install the Connected Machine agent manually, or on multiple machines at
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
+> [!NOTE]
+> For additional guidance regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md).
+>
+ ## Supported cloud operations When you connect your machine to Azure Arc-enabled servers, you can perform many operational functions, just as you would with native Azure virtual machines. Below are some of the key supported actions for connected machines.
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
To use the ACL integration, your client application must assume the identity of
> [!IMPORTANT] > Once the enable operation is complete, the nodes in your cache instance reboot to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
+For information on using Microsoft Entra ID with Azure CLI, see the [reference pages for identity](/cli/azure/redis/identity).
+ ## Using data access configuration with your cache If you would like to use a custom access policy instead of Redis Data Owner, go to the **Data Access Configuration** on the Resource menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application).
The following table includes links to code samples, which demonstrate how to con
- When calling the Redis server `AUTH` command periodically, consider adding a jitter so that the `AUTH` commands are staggered, and your Redis server doesn't receive lot of `AUTH` commands at the same time.
-## Next steps
+## Related content
- [Configure role-based access control with Data Access Policy](cache-configure-role-based-access-control.md)
+- [Reference pages for identity](/cli/azure/redis/identity)
+
azure-cache-for-redis Cache Best Practices Enterprise Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md
You might also see `CROSSSLOT` errors with Enterprise clustering policy. Only th
In Active-Active databases, multi-key write commands (`DEL`, `MSET`, `UNLINK`) can only be run on keys that are in the same slot. However, the following multi-key commands are allowed across slots in Active-Active databases: `MGET`, `EXISTS`, and `TOUCH`. For more information, see [Database clustering](https://docs.redis.com/latest/rs/databases/durability-ha/clustering/#multikey-operations).
+## Enterprise Flash Best Practices
+The Enterprise Flash tier utilizes both NVMe Flash storage and RAM. Because Flash storage is lower cost, using the Enterprise Flash tier allows you to trade off some performance for price efficiency.
+
+On Enterprise Flash instances, 20% of the cache space is on RAM, while the other 80% uses Flash storage. For example, on an instance with 1 TB of total cache space, roughly 200 GB would be RAM and roughly 800 GB would be Flash storage. All of the _keys_ are stored on RAM, while the _values_ can be stored either in Flash storage or RAM. The location of the values is determined intelligently by the Redis software. "Hot" values that are accessed frequently are stored on RAM, while "Cold" values that are less commonly used are kept on Flash. Before data is read or written, it must be moved to RAM, becoming "Hot" data.
+
+Because Redis optimizes for the best performance, the instance first fills up the available RAM before adding items to Flash storage. This has a few implications for performance:
+- When testing with low memory usage, performance and latency may be significantly better than with a full cache instance because only RAM is being used.
+- As you write more data to the cache, the proportion of data in RAM compared to Flash storage will decrease, typically causing latency and throughput performance to decrease as well.
+
+### Workloads well-suited for the Enterprise Flash tier
+Workloads that are likely to run well on the Enterprise Flash tier often have the following characteristics:
+- Read heavy, with a high ratio of read commands to write commands.
+- Access is focused on a subset of keys that are used much more frequently than the rest of the dataset.
+- Relatively large values in comparison to key names. (Since key names are always stored in RAM, this can become a bottleneck for memory growth.)
+
+### Workloads that are not well-suited for the Enterprise Flash tier
+Some workloads have access characteristics that are less optimized for the design of the Flash tier:
+- Write heavy workloads.
+- Random or uniform data access patterns across most of the dataset.
+- Long key names with relatively small value sizes.
+ ## Handling Region Down Scenarios with Active Geo-Replication Active geo-replication is a powerful feature to dramatically boost availability when using the Enterprise tiers of Azure Cache for Redis. You should take steps, however, to prepare your caches if there's a regional outage.
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Now, you create an Azure Cosmos DB account as a [serverless account type](../cos
|Prompt| Selection| |--|--|
- |**Select an Azure Database Server**| Choose **Core (SQL)** to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB](../cosmos-db/introduction.md). |
+ |**Select an Azure Database Server**| Choose **Core (NoSQL)** to create a document database that you can query by using SQL syntax or Query Copilot ([Preview](../cosmos-db/nosql/query/how-to-enable-use-copilot.md)), which converts natural language prompts to queries. [Learn more about Azure Cosmos DB](../cosmos-db/introduction.md). |
|**Account name**| Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.| |**Select a capacity model**| Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. |**Select a resource group for new resources**| Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). |
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
When you deploy your project to a function app in Azure, the entire contents of
To connect to Cosmos DB, first [create an account, database, and container](../cosmos-db/nosql/quickstart-portal.md). Then you may connect Functions to Cosmos DB using [trigger and bindings](functions-bindings-cosmosdb-v2.md), like this [example](functions-add-output-binding-cosmos-db-vs-code.md). You may also use the Python library for Cosmos DB, like so: ```python
-pip install azure-cosmos
+pip install azure-cosmos
+pip install aiohttp
-from azure.cosmos import CosmosClient, exceptions
+from azure.cosmos.aio import CosmosClient
+from azure.cosmos import exceptions
from azure.cosmos.partition_key import PartitionKey
+import asyncio
# Replace these values with your Cosmos DB connection information endpoint = "https://azure-cosmos-nosql.documents.azure.com:443/"
partition_key = "/partition_key"
# Set the total throughput (RU/s) for the database and container database_throughput = 1000
-# Initialize the Cosmos client
-client = CosmosClient(endpoint, key)
+# Helper function to get or create database and container
+async def get_or_create_container(client, database_id, container_id, partition_key):
-# Create or get a reference to a database
-try:
- database = client.create_database_if_not_exists(id=database_id)
+ database = await client.create_database_if_not_exists(id=database_id)
print(f'Database "{database_id}" created or retrieved successfully.')
-except exceptions.CosmosResourceExistsError:
- database = client.get_database_client(database_id)
- print('Database with id \'{0}\' was found'.format(database_id))
-
-# Create or get a reference to a container
-try:
- container = database.create_container(id=container_id, partition_key=PartitionKey(path='/partitionKey'))
- print('Container with id \'{0}\' created'.format(container_id))
-
-except exceptions.CosmosResourceExistsError:
- container = database.get_container_client(container_id)
- print('Container with id \'{0}\' was found'.format(container_id))
-
-# Sample document data
-sample_document = {
- "id": "1",
- "name": "Doe Smith",
- "city": "New York",
- "partition_key": "NY"
-}
-
-# Insert a document
-container.create_item(body=sample_document)
-
-# Query for documents
-query = "SELECT * FROM c where c.id = 1"
-items = list(container.query_items(query, enable_cross_partition_query=True))
+ container = await database.create_container_if_not_exists(id=container_id, partition_key=PartitionKey(path=partition_key))
+ print(f'Container with id "{container_id}" created')
+
+ return container
+
+async def create_products():
+ async with CosmosClient(endpoint, credential=key) as client:
+ container = await get_or_create_container(client, database_id, container_id, partition_key)
+ for i in range(10):
+ await container.upsert_item({
+ 'id': f'item{i}',
+ 'productName': 'Widget',
+ 'productModel': f'Model {i}'
+ })
+
+async def get_products():
+ items = []
+ async with CosmosClient(endpoint, credential=key) as client:
+ container = await get_or_create_container(client, database_id, container_id, partition_key)
+ async for item in container.read_all_items():
+ items.append(item)
+ return items
+
+async def main():
+ await create_products()
+ products = await get_products()
+ print(products)
+
+if __name__ == "__main__":
+ asyncio.run(main())
``` ::: zone pivot="python-mode-decorators"
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
To learn more about how to send device-to-cloud telemetry, and the other way aro
[C# script]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx [create a storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal [Create an Azure storage account]: #create-an-azure-storage-account
-[create an IoT hub]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub
+[create an IoT hub]: ../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub
[Create a function and add an Event Grid subscription]: #create-a-function-and-add-an-event-grid-subscription [free account]: https://azure.microsoft.com/free/ [general-purpose v2 storage account]: ../storage/common/storage-account-overview.md
To learn more about how to send device-to-cloud telemetry, and the other way aro
[resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups [the root of the sample]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing [Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
-[Send telemetry from a device]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp
+[Send telemetry from a device]: ../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-csharp
[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Upload a geofence into your Azure storage account]: #upload-a-geofence-into-your-azure-storage-account
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
description: Learn about Microsoft Azure Maps Transactions Previously updated : 09/22/2023 Last updated : 04/05/2024
The following table summarizes the Azure Maps services that generate transaction
| Data service (Deprecated<sup>1</sup>) | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| | [Data registry] | Yes | One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| | [Geolocation]| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
-| [Render] | Yes, except for Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Render] | Yes, except Get Copyright API, Get Attribution API and Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
| [Route] | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Search v1]<br>[Search v2] | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Spatial] | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are nonbillable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
azure-monitor Alerts Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-automatic-migration.md
- Title: Understand how the automatic migration process for your Azure Monitor classic alerts works
-description: Learn how the automatic migration process works.
--- Previously updated : 06/20/2023--
-# Understand the automatic migration process for your classic alert rules
-
-As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
-
-A migration tool is available in the Azure portal for customers to trigger migration themselves. This article explains the automatic migration process in public cloud, that will start after 31 May 2021. It also details issues and solutions you might run into.
-
-## Important things to note
-
-The migration process converts classic alert rules to new, equivalent alert rules, and creates action groups. In preparation, be aware of the following points:
--- The notification payload formats for new alert rules are different from payloads of the classic alert rules because they support more features. If you have a classic alert rule with logic apps, runbooks, or webhooks, they might stop functioning as expected after migration, because of differences in payload. [Learn how to prepare for the migration](alerts-prepare-migration.md).--- Some classic alert rules can't be migrated by using the tool. [Learn which rules can't be migrated and what to do with them](alerts-understand-migration.md#manually-migrating-classic-alerts-to-newer-alerts).-
-## What will happen during the automatic migration process in public cloud?
--- Starting 31 May 2021, you won't be able to create any new classic alert rules and migration of classic alerts will be triggered in batches.-- Any classic alert rules that are monitoring deleted target resources or on [metrics that are no longer supported](alerts-understand-migration.md#classic-alert-rules-on-deprecated-metrics) are considered invalid.-- Classic alert rules that are invalid will be removed sometime after 31 May 2021.-- Once migration for your subscription starts, it should be complete within an hour. Customers can monitor the status of migration on [the migration tool in Azure Monitor](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/MigrationBladeViewModel).-- Subscription owners will receive an email on success or failure of the migration.-
- > [!NOTE]
- > If you don't want to wait for the automatic migration process to start, you can still trigger the migration voluntarily using the migration tool.
-
-## What if the automatic migration fails?
-
-When the automatic migration process fails, subscription owners will receive an email notifying them of the issue. You can use the migration tool in Azure Monitor to see the full details of the issue. See the [troubleshooting guide](alerts-understand-migration.md#common-problems-and-remedies) for help with problems you might face during migration.
-
- > [!NOTE]
- > In case an action is needed from customers, like temporarily disabling a resource lock or changing a policy assignment, customers will need to resolve any such issues. If the issues are not resolved by then, successful migration of your classic alerts cannot be guaranteed.
-
-## Next steps
--- [Prepare for the migration](alerts-prepare-migration.md)-- [Understand how the migration tool works](alerts-understand-migration.md)
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
- Title: Create and manage classic metric alerts using Azure Monitor
-description: Learn how to use Azure portal or PowerShell to create, view and manage classic metric alert rules.
--- Previously updated : 06/20/2023---
-# Create, view, and manage classic metric alerts using Azure Monitor
-
-> [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
->
-
-Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts is an older functionality that allows for alerting only on non-dimensional metrics. There's an existing newer functionality called Metric alerts, which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we'll describe how to create, view and manage classic metric alert rules through Azure portal and PowerShell.
-
-## With Azure portal
-
-1. In the [portal](https://portal.azure.com/), locate the resource that you want to monitor, and then select it.
-
-2. In the **MONITORING** section, select **Alerts (Classic)**. The text and icon might vary slightly for different resources. If you don't find **Alerts (Classic)** here, you might find it in **Alerts** or **Alert Rules**.
-
- :::image type="content" source="media/alerts-classic-portal/AlertRulesButton.png" lightbox="media/alerts-classic-portal/AlertRulesButton.png" alt-text="Monitoring":::
-
-3. Select the **Add metric alert (classic)** command, and then fill in the fields.
-
- :::image type="content" source="media/alerts-classic-portal/AddAlertOnlyParamsPage.png" lightbox="media/alerts-classic-portal/AddAlertOnlyParamsPage.png" alt-text="Add Alert":::
-
-4. **Name** your alert rule. Then choose a **Description**, which also appears in notification emails.
-
-5. Select the **Metric** that you want to monitor. Then choose a **Condition** and **Threshold** value for the metric. Also choose the **Period** of time that the metric rule must be satisfied before the alert triggers. For example, if you use the period "Over the last 5 minutes" and your alert looks for a CPU above 80%, the alert triggers when the CPU has been consistently above 80% for 5 minutes. After the first trigger occurs, it triggers again when the CPU stays below 80% for 5 minutes. The CPU metric measurement happens every minute.
-
-6. Select **Email owners...** if you want administrators and co-administrators to receive email notifications when the alert fires.
-
-7. If you want to send notifications to additional email addresses when the alert fires, add them in the **Additional Administrator email(s)** field. Separate multiple emails with semicolons, in the following format: *email\@contoso.com;email2\@contoso.com*
-
-8. Put in a valid URI in the **Webhook** field if you want it to be called when the alert fires.
-
-9. If you use Azure Automation, you can select a runbook to be run when the alert fires.
-
-10. Select **OK** to create the alert.
-
-Within a few minutes, the alert is active and triggers as previously described.
-
-After you create an alert, you can select it and do one of the following tasks:
-
-* View a graph that shows the metric threshold and the actual values from the previous day.
-* Edit or delete it.
-* **Disable** or **Enable** it if you want to temporarily stop or resume receiving notifications for that alert.
-
-## With PowerShell
--
-This section shows how to use PowerShell commands create, view and manage classic metric alerts.The examples in the article illustrate how you can use Azure Monitor cmdlets for classic metric alerts.
-
-1. If you haven't already, set up PowerShell to run on your computer. For more information, see [How to Install and Configure PowerShell](/powershell/azure/). You can also review the entire list of Azure Monitor PowerShell cmdlets at [Azure Monitor (Insights) Cmdlets](/powershell/module/az.applicationinsights).
-
-2. First, log in to your Azure subscription.
-
- ```powershell
- Connect-AzAccount
- ```
-
-3. You'll see a sign in screen. Once you sign in your Account, TenantID, and default Subscription ID are displayed. All the Azure cmdlets work in the context of your default subscription. To view the list of subscriptions you have access to, use the following command:
-
- ```powershell
- Get-AzSubscription
- ```
-
-4. To change your working context to a different subscription, use the following command:
-
- ```powershell
- Set-AzContext -SubscriptionId <subscriptionid>
- ```
-
-5. You can retrieve all classic metric alert rules on a resource group:
-
- ```powershell
- Get-AzAlertRule -ResourceGroup montest
- ```
-
-6. You can view details of a classic metric alert rule
-
- ```powershell
- Get-AzAlertRule -Name simpletestCPU -ResourceGroup montest -DetailedOutput
- ```
-
-7. You can retrieve all alert rules set for a target resource. For example, all alert rules set on a VM.
-
- ```powershell
- Get-AzAlertRule -ResourceGroup montest -TargetResourceId /subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig
- ```
-
-8. Classic alert rules can no longer be created via PowerShell. Use the new ['Add-AzMetricAlertRuleV2'](/powershell/module/az.monitor/add-azmetricalertrulev2) command to create a metric alert rule instead.
-
-## Next steps
--- [Create a classic metric alert with a Resource Manager template](./alerts-enable-template.md).-- [Have a classic metric alert notify a non-Azure system using a webhook](./alerts-webhooks.md).
azure-monitor Alerts Classic.Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic.overview.md
- Title: Overview of classic alerts in Azure Monitor
-description: Classic alerts will be deprecated. Alerts enable you to monitor Azure resource metrics, events, or logs, and they notify you when a condition you specify is met.
--- Previously updated : 06/20/2023--
-# What are classic alerts in Azure?
-
-> [!NOTE]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [near real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **February 29, 2024**.
->
-
-Alerts allow you to configure conditions over data, and they notify you when the conditions match the latest monitoring data.
-
-## Old and new alerting capabilities
-
-In the past, Azure Monitor, Application Insights, Log Analytics, and Service Health had separate alerting capabilities. Over time, Azure improved and combined both the user interface and different methods of alerting. The consolidation is still in process.
-
-You can view classic alerts only on the classic alerts user screen in the Azure portal. To see this screen, select **View classic alerts** on the **Alerts** screen.
-
- :::image type="content" source="media/alerts-classic.overview/monitor-alert-screen2.png" lightbox="media/alerts-classic.overview/monitor-alert-screen2.png" alt-text="Screenshot that shows alert choices in the Azure portal.":::
-
-The new alerts user experience has the following benefits over the classic alerts experience:
-- **Better notification system:** All newer alerts use action groups. You can reuse these named groups of notifications and actions in multiple alerts. Classic metric alerts and older Log Analytics alerts don't use action groups.
-- **A unified authoring experience:** All alert creation for metrics, logs, and activity logs across Azure Monitor, Log Analytics, and Application Insights is in one place.
-- **View fired Log Analytics alerts in the Azure portal:** You can now also see fired Log Analytics alerts in your subscription. Previously, these alerts were in a separate portal.
-- **Separation of fired alerts and alert rules:** Alert rules (the definition of the condition that triggers an alert) and fired alerts (an instance of the alert rule firing) are differentiated, so the operational and configuration views are separated.
-- **Better workflow:** The new alerts authoring experience guides the user through configuring an alert rule. This change makes it simpler to discover the right things to get alerted on.
-- **Smart alerts consolidation and setting alert state:** Newer alerts include auto-grouping functionality that shows similar alerts together to reduce overload in the user interface.
-
-The newer metric alerts have the following benefits over the classic metric alerts:
-- **Improved latency:** Newer metric alerts can run as frequently as every minute. Older metric alerts always run at a frequency of 5 minutes. Newer alerts have an increasingly smaller delay from issue occurrence to notification or action (3 to 5 minutes). Older alerts take 5 to 15 minutes depending on the type. Log alerts typically have a delay of 10 to 15 minutes because of the time it takes to ingest the logs. Newer processing methods are reducing that time.
-- **Support for multidimensional metrics:** You can alert on dimensional metrics, so you can monitor an interesting segment of the metric.
-- **More control over metric conditions:** You can define richer alert rules. The newer alerts support monitoring the maximum, minimum, average, and total values of metrics.
-- **Combined monitoring of multiple metrics:** You can monitor multiple metrics (currently, up to two metrics) with a single rule. An alert triggers if both metrics breach their respective thresholds for the specified time period.
-- **Better notification system:** All newer alerts use [action groups](./action-groups.md). You can reuse these named groups of notifications and actions in multiple alerts. Classic metric alerts and older Log Analytics alerts don't use action groups.
-- **Metrics from logs (preview):** You can now extract and convert log data that goes into Log Analytics into Azure Monitor metrics and then alert on it like other metrics. For the terminology specific to classic alerts, see [Alerts (classic)]().
-
-## Classic alerts on Azure Monitor data
-Two types of classic alerts are available:
-
-* **Classic metric alerts**: This alert triggers when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when that threshold is crossed and the alert condition is met. At that point, the alert is considered "Activated." It generates another notification when it's "Resolved," that is, when the threshold is crossed again and the condition is no longer met.
-* **Classic activity log alerts**: A streaming log alert that triggers on an activity log event entry that matches your filter criteria. These alerts have only one state: "Activated." The alert engine applies the filter criteria to any new event. It doesn't search to find older entries. These alerts can notify you when a new Service Health incident occurs or when a user or application performs an operation in your subscription. An example of an operation might be "Delete virtual machine."
-
-For resource log data available through Azure Monitor, route the data into Log Analytics and use a log query alert. Log Analytics now uses the [new alerting method](./alerts-overview.md).
-
-The following diagram summarizes sources of data in Azure Monitor and, conceptually, how you can alert off of that data.
--
-## Taxonomy of alerts (classic)
-Azure uses the following terms to describe classic alerts and their functions:
-* **Alert**: A definition of criteria (one or more rules or conditions) that becomes activated when met.
-* **Active**: The state when the criteria defined by a classic alert are met.
-* **Resolved**: The state when the criteria defined by a classic alert are no longer met after they were previously met.
-* **Notification**: The action taken when a classic alert becomes active.
-* **Action**: A specific call sent to a receiver of a notification (for example, emailing an address or posting to a webhook URL). Notifications can usually trigger multiple actions.
-
-## How do I receive a notification from an Azure Monitor classic alert?
-Historically, Azure alerts from different services used their own built-in notification methods.
-
-Azure Monitor created a reusable notification grouping called *action groups*. Action groups specify a set of receivers for a notification. Any time an alert is activated that references the action group, all receivers receive that notification. With action groups, you can reuse a grouping of receivers (for example, your on-call engineer list) across many alert objects.
-
-Action groups support notification by posting to a webhook URL and to email addresses, SMS numbers, and several other actions. For more information, see [Action groups](./action-groups.md).
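-
-As a rough sketch (the action group name, short name, resource group, and email address below are placeholders, and cmdlet parameters can vary by Az.Monitor version), you could create an action group with a single email receiver by using Azure PowerShell:
-
-```powershell
-# Sketch only: create an action group with one email receiver.
-# The action group name, short name, resource group, and address are placeholders.
-$email = New-AzActionGroupReceiver -Name "oncall-email" -EmailReceiver -EmailAddress "oncall@contoso.com"
-
-Set-AzActionGroup -Name "myactiongroup" -ResourceGroupName "montest" `
-    -ShortName "myag" -Receiver $email
-```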
-
-Older classic activity log alerts use action groups. But the older metric alerts don't use action groups. Instead, you can configure the following actions:
-
-- Send email notifications to the service administrator, co-administrators, or other email addresses that you specify.
-- Call a webhook, which enables you to launch other automation actions.
-
-Webhooks enable automation and remediation, for example, by using:
-- Azure Automation runbooks
-- Azure Functions
-- Azure Logic Apps
-- A third-party service
-
-## Next steps
-Get information about alert rules and how to configure them:
-
-* Learn more about [metrics](../data-platform.md).
-* Configure [classic metric alerts via the Azure portal](alerts-classic-portal.md).
-* Configure [classic metric alerts via PowerShell](alerts-classic-portal.md).
-* Configure [classic metric alerts via the command-line interface (CLI)](alerts-classic-portal.md).
-* Configure [classic metric alerts via the Azure Monitor REST API](/rest/api/monitor/alertrules).
-* Learn more about [activity logs](../essentials/platform-logs-overview.md).
-* Configure [activity log alerts via the Azure portal](./activity-log-alerts.md).
-* Configure [activity log alerts via Azure Resource Manager](./alerts-activity-log.md).
-* Review the [activity log alert webhook schema](activity-log-alerts-webhook.md).
-* Learn more about [action groups](./action-groups.md).
-* Configure [newer alerts](alerts-metric.md).
azure-monitor Alerts Enable Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-enable-template.md
- Title: Resource Manager template - create metric alert
-description: Learn how to use a Resource Manager template to create a classic metric alert to receive notifications by email or webhook.
-- Previously updated : 05/28/2023---
-# Create a classic metric alert rule with a Resource Manager template
-
-> [!WARNING]
-> This article describes how to create older classic metric alert rules. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
->
-
-This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure Azure classic metric alert rules. This approach enables you to automatically set up alert rules when your resources are created, so that all resources are monitored correctly.
-
-The basic steps are as follows:
-
-1. Create a template as a JSON file that describes how to create the alert rule.
-2. [Deploy the template using any deployment method](../../azure-resource-manager/templates/deploy-powershell.md).
-
-Below we describe how to create a Resource Manager template first for an alert rule alone, then for an alert rule during the creation of another resource.
-
-## Resource Manager template for a classic metric alert rule
-To create an alert rule using a Resource Manager template, you create a resource of type `Microsoft.Insights/alertRules` and fill in all related properties. Below is a template that creates an alert rule.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "alertName": {
- "type": "string",
- "metadata": {
- "description": "Name of alert"
- }
- },
- "alertDescription": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Description of alert"
- }
- },
- "isEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether alerts are enabled"
- }
- },
- "resourceId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Resource ID of the resource emitting the metric that will be used for the comparison."
- }
- },
- "metricName": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Name of the metric used in the comparison to activate the alert."
- }
- },
- "operator": {
- "type": "string",
- "defaultValue": "GreaterThan",
- "allowedValues": [
- "GreaterThan",
- "GreaterThanOrEqual",
- "LessThan",
- "LessThanOrEqual"
- ],
- "metadata": {
- "description": "Operator comparing the current value with the threshold value."
- }
- },
- "threshold": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The threshold value at which the alert is activated."
- }
- },
- "aggregation": {
- "type": "string",
- "defaultValue": "Average",
- "allowedValues": [
- "Average",
- "Last",
- "Maximum",
- "Minimum",
- "Total"
- ],
- "metadata": {
- "description": "How the data that is collected should be combined over time."
- }
- },
- "windowSize": {
- "type": "string",
- "defaultValue": "PT5M",
- "metadata": {
- "description": "Period of time used to monitor alert activity based on the threshold. Must be between five minutes and one day. ISO 8601 duration format."
- }
- },
- "sendToServiceOwners": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Specifies whether alerts are sent to service owners"
- }
- },
- "customEmailAddresses": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "Comma-delimited email addresses where the alerts are also sent"
- }
- },
- "webhookUrl": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "URL of a webhook that will receive an HTTP POST when the alert activates."
- }
- }
- },
- "variables": {
- "customEmails": "[split(parameters('customEmailAddresses'), ',')]"
- },
- "resources": [
- {
- "type": "Microsoft.Insights/alertRules",
- "name": "[parameters('alertName')]",
- "location": "[resourceGroup().location]",
- "apiVersion": "2016-03-01",
- "properties": {
- "name": "[parameters('alertName')]",
- "description": "[parameters('alertDescription')]",
- "isEnabled": "[parameters('isEnabled')]",
- "condition": {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
- "dataSource": {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
- "resourceUri": "[parameters('resourceId')]",
- "metricName": "[parameters('metricName')]"
- },
- "operator": "[parameters('operator')]",
- "threshold": "[parameters('threshold')]",
- "windowSize": "[parameters('windowSize')]",
- "timeAggregation": "[parameters('aggregation')]"
- },
- "actions": [
- {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
- "sendToServiceOwners": "[parameters('sendToServiceOwners')]",
- "customEmails": "[variables('customEmails')]"
- },
- {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleWebhookAction",
- "serviceUri": "[parameters('webhookUrl')]",
- "properties": {}
- }
- ]
- }
- }
- ]
-}
-```
-
-An explanation of the schema and properties for an alert rule [is available here](/rest/api/monitor/alertrules).
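-
-As an example, you could deploy the template above with Azure PowerShell. The following is only a sketch; the template file name, resource group, and parameter values are placeholders for your own values.
-
-```powershell
-# Sketch only: deploy the alert rule template with inline parameter values.
-# The template file name, resource group, and parameter values are placeholders.
-New-AzResourceGroupDeployment -ResourceGroupName "montest" `
-    -TemplateFile ".\classic-alert-rule.json" `
-    -alertName "highRequests" `
-    -resourceId "/subscriptions/<subscription-id>/resourceGroups/montest/providers/microsoft.foo/sites/mysite1" `
-    -metricName "Requests" `
-    -threshold "100" `
-    -customEmailAddresses "admin@contoso.com"
-```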
-
-## Resource Manager template for a resource with a classic metric alert rule
-An alert rule on a Resource Manager template is most often useful when creating an alert rule while creating a resource. For example, you may want to ensure that a "CPU % > 80" rule is set up every time you deploy a Virtual Machine. To do this, you add the alert rule as a resource in the resource array for your VM template and add a dependency using the `dependsOn` property to the VM resource ID. Here's a full example that creates a Windows VM and adds an alert rule that notifies subscription admins when the CPU utilization goes above 80%.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "newStorageAccountName": {
- "type": "string",
- "metadata": {
- "Description": "The name of the storage account where the VM disk is stored."
- }
- },
- "adminUsername": {
- "type": "string",
- "metadata": {
- "Description": "The name of the administrator account on the VM."
- }
- },
- "adminPassword": {
- "type": "securestring",
- "metadata": {
- "Description": "The administrator account password on the VM."
- }
- },
- "dnsNameForPublicIP": {
- "type": "string",
- "metadata": {
- "Description": "The name of the public IP address used to access the VM."
- }
- }
- },
- "variables": {
- "location": "Central US",
- "imagePublisher": "MicrosoftWindowsServer",
- "imageOffer": "WindowsServer",
- "windowsOSVersion": "2012-R2-Datacenter",
- "OSDiskName": "osdisk1",
- "nicName": "nc1",
- "addressPrefix": "10.0.0.0/16",
- "subnetName": "sn1",
- "subnetPrefix": "10.0.0.0/24",
- "storageAccountType": "Standard_LRS",
- "publicIPAddressName": "ip1",
- "publicIPAddressType": "Dynamic",
- "vmStorageAccountContainerName": "vhds",
- "vmName": "vm1",
- "vmSize": "Standard_A0",
- "virtualNetworkName": "vn1",
- "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
- "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
- "vmID":"[resourceId('Microsoft.Compute/virtualMachines',variables('vmName'))]",
- "alertName": "highCPUOnVM",
- "alertDescription":"CPU is over 80%",
- "alertIsEnabled": true,
- "resourceId": "",
- "metricName": "Percentage CPU",
- "operator": "GreaterThan",
- "threshold": "80",
- "windowSize": "PT5M",
- "aggregation": "Average",
- "customEmails": "",
- "sendToServiceOwners": true,
- "webhookUrl": "http://testwebhook.test"
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[parameters('newStorageAccountName')]",
- "apiVersion": "2015-06-15",
- "location": "[variables('location')]",
- "properties": {
- "accountType": "[variables('storageAccountType')]"
- }
- },
- {
- "apiVersion": "2016-03-30",
- "type": "Microsoft.Network/publicIPAddresses",
- "name": "[variables('publicIPAddressName')]",
- "location": "[variables('location')]",
- "properties": {
- "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
- "dnsSettings": {
- "domainNameLabel": "[parameters('dnsNameForPublicIP')]"
- }
- }
- },
- {
- "apiVersion": "2016-03-30",
- "type": "Microsoft.Network/virtualNetworks",
- "name": "[variables('virtualNetworkName')]",
- "location": "[variables('location')]",
- "properties": {
- "addressSpace": {
- "addressPrefixes": [
- "[variables('addressPrefix')]"
- ]
- },
- "subnets": [
- {
- "name": "[variables('subnetName')]",
- "properties": {
- "addressPrefix": "[variables('subnetPrefix')]"
- }
- }
- ]
- }
- },
- {
- "apiVersion": "2016-03-30",
- "type": "Microsoft.Network/networkInterfaces",
- "name": "[variables('nicName')]",
- "location": "[variables('location')]",
- "dependsOn": [
- "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]",
- "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
- ],
- "properties": {
- "ipConfigurations": [
- {
- "name": "ipconfig1",
- "properties": {
- "privateIPAllocationMethod": "Dynamic",
- "publicIPAddress": {
- "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
- },
- "subnet": {
- "id": "[variables('subnetRef')]"
- }
- }
- }
- ]
- }
- },
- {
- "apiVersion": "2016-03-30",
- "type": "Microsoft.Compute/virtualMachines",
- "name": "[variables('vmName')]",
- "location": "[variables('location')]",
- "dependsOn": [
- "[concat('Microsoft.Storage/storageAccounts/', parameters('newStorageAccountName'))]",
- "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
- ],
- "properties": {
- "hardwareProfile": {
- "vmSize": "[variables('vmSize')]"
- },
- "osProfile": {
- "computername": "[variables('vmName')]",
- "adminUsername": "[parameters('adminUsername')]",
- "adminPassword": "[parameters('adminPassword')]"
- },
- "storageProfile": {
- "imageReference": {
- "publisher": "[variables('imagePublisher')]",
- "offer": "[variables('imageOffer')]",
- "sku": "[variables('windowsOSVersion')]",
- "version": "latest"
- },
- "osDisk": {
- "name": "osdisk",
- "vhd": {
- "uri": "[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',variables('OSDiskName'),'.vhd')]"
- },
- "caching": "ReadWrite",
- "createOption": "FromImage"
- }
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
- }
- ]
- }
- }
- },
- {
- "type": "Microsoft.Insights/alertRules",
- "name": "[variables('alertName')]",
- "dependsOn": [
- "[variables('vmID')]"
- ],
- "location": "[variables('location')]",
- "apiVersion": "2016-03-01",
- "properties": {
- "name": "[variables('alertName')]",
- "description": "variables('alertDescription')",
- "isEnabled": "[variables('alertIsEnabled')]",
- "condition": {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
- "dataSource": {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
- "resourceUri": "[variables('vmID')]",
- "metricName": "[variables('metricName')]"
- },
- "operator": "[variables('operator')]",
- "threshold": "[variables('threshold')]",
- "windowSize": "[variables('windowSize')]",
- "timeAggregation": "[variables('aggregation')]"
- },
- "actions": [
- {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
- "sendToServiceOwners": "[variables('sendToServiceOwners')]",
- "customEmails": "[variables('customEmails')]"
- },
- {
- "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleWebhookAction",
- "serviceUri": "[variables('webhookUrl')]",
- "properties": {}
- }
- ]
- }
- }
- ]
-}
-```
-
-## Next steps
-* [Read more about Alerts](./alerts-overview.md)
-* [Add Diagnostic Settings](../essentials/resource-manager-diagnostic-settings.md) to your Resource Manager template
-* For the JSON syntax and properties, see [Microsoft.Insights/alertrules](/azure/templates/microsoft.insights/alertrules) template reference.
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
- Title: Update logic apps & runbooks for alerts migration
-description: Learn how to modify your webhooks, logic apps, and runbooks to prepare for voluntary migration.
-- Previously updated : 06/20/2023---
-# Prepare your logic apps and runbooks for migration of classic alert rules
-
-> [!NOTE]
-> As [previously announced](monitoring-classic-retirement.md), classic alerts in Azure Monitor are retired for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
->
-
-If you choose to voluntarily migrate your classic alert rules to new alert rules, there are some differences between the two systems. This article explains those differences and how you can prepare for the change.
-
-## API changes
-
-The APIs that create and manage classic alert rules (`microsoft.insights/alertrules`) are different from the APIs that create and manage new metric alerts (`microsoft.insights/metricalerts`). If you programmatically create and manage classic alert rules today, update your deployment scripts to work with the new APIs.
-
-The following table is a reference to the programmatic interfaces for both classic and new alerts:
-
-| Deployment script type | Classic alerts | New metric alerts |
-| - | -- | -- |
-|REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | `az monitor alert` | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
-|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) |
-| Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md)|[For new metric alerts](./alerts-metric-create-templates.md)|
-
-## Notification payload changes
-
-The notification payload format is slightly different between [classic alert rules](alerts-webhooks.md) and [new metric alerts](alerts-metric-near-real-time.md#payload-schema). If you have classic alert rules with webhook, logic app, or runbook actions, you must update the targets to accept the new payload format.
-
-Use the following table to map the webhook payload fields from the classic format to the new format:
-
-| Notification endpoint type | Classic alerts | New metric alerts |
-| -- | -- | -- |
-|Was the alert activated or resolved? | **status** | **data.status** |
-|Contextual information about the alert | **context** | **data.context** |
-|Time stamp at which the alert was activated or resolved | **context.timestamp** | **data.context.timestamp** |
-| Alert rule ID | **context.id** | **data.context.id** |
-| Alert rule name | **context.name** | **data.context.name** |
-| Description of the alert rule | **context.description** | **data.context.description** |
-| Alert rule condition | **context.condition** | **data.context.condition** |
-| Metric name | **context.condition.metricName** | **data.context.condition.allOf[0].metricName** |
-| Time aggregation (how the metric is aggregated over the evaluation window)| **context.condition.timeAggregation** | **data.context.condition.timeAggregation** |
-| Evaluation period | **context.condition.windowSize** | **data.context.condition.windowSize** |
-| Operator (how the aggregated metric value is compared against the threshold) | **context.condition.operator** | **data.context.condition.operator** |
-| Threshold | **context.condition.threshold** | **data.context.condition.allOf[0].threshold** |
-| Metric value | **context.condition.metricValue** | **data.context.condition.allOf[0].metricValue** |
-| Subscription ID | **context.subscriptionId** | **data.context.subscriptionId** |
-| Resource group of the affected resource | **context.resourceGroup** | **data.context.resourceGroup** |
-| Name of the affected resource | **context.resourceName** | **data.context.resourceName** |
-| Type of the affected resource | **context.resourceType** | **data.context.resourceType** |
-| Resource ID of the affected resource | **context.resourceId** | **data.context.resourceId** |
-| Direct link to the portal resource summary page | **context.portalLink** | **data.context.portalLink** |
-| Custom payload fields to be passed to the webhook or logic app | **properties** | **data.properties** |
-
-The payloads are similar, as you can see. The following section offers:
-
-- Details about modifying logic apps to work with the new format.
-- A runbook example that parses the notification payload for new alerts.
-
-## Modify a logic app to receive a metric alert notification
-
-If you're using logic apps with classic alerts, you must modify your logic-app code to parse the new metric alerts payload. Follow these steps:
-
-1. Create a new logic app.
-
-1. Use the template "Azure Monitor - Metrics Alert Handler". This template has an **HTTP request** trigger with the appropriate schema defined.
-
- :::image type="content" source="media/alerts-prepare-migration/logic-app-template.png" lightbox="media/alerts-prepare-migration/logic-app-template.png" alt-text="Screenshot shows two buttons, Blank Logic App and Azure Monitor ΓÇô Metrics Alert Handler.":::
-
-1. Add an action to host your processing logic.
-
-## Use an automation runbook that receives a metric alert notification
-
-The following example provides PowerShell code to use in your runbook. This code can parse the payloads for both classic metric alert rules and new metric alert rules.
-
-```PowerShell
-## Example PowerShell code to use in a runbook to handle parsing of both classic and new metric alerts.
-
-[OutputType("PSAzureOperationResponse")]
-
-param
-(
- [Parameter (Mandatory=$false)]
- [object] $WebhookData
-)
-
-$ErrorActionPreference = "stop"
-
-if ($WebhookData)
-{
- # Get the data object from WebhookData.
- $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
-
- # Determine whether the alert triggering the runbook is a classic metric alert or a new metric alert (depends on the payload schema).
- $schemaId = $WebhookBody.schemaId
- Write-Verbose "schemaId: $schemaId" -Verbose
- if ($schemaId -eq "AzureMonitorMetricAlert") {
-
- # This is the new metric alert schema.
- $AlertContext = [object] ($WebhookBody.data).context
- $status = ($WebhookBody.data).status
-
- # Parse fields related to alert rule condition.
- $metricName = $AlertContext.condition.allOf[0].metricName
- $metricValue = $AlertContext.condition.allOf[0].metricValue
- $threshold = $AlertContext.condition.allOf[0].threshold
- $timeAggregation = $AlertContext.condition.allOf[0].timeAggregation
- }
- elseif ($schemaId -eq $null) {
- # This is the classic metric alert schema.
- $AlertContext = [object] $WebhookBody.context
- $status = $WebhookBody.status
-
- # Parse fields related to alert rule condition.
- $metricName = $AlertContext.condition.metricName
- $metricValue = $AlertContext.condition.metricValue
- $threshold = $AlertContext.condition.threshold
- $timeAggregation = $AlertContext.condition.timeAggregation
- }
- else {
- # The schema is neither a classic metric alert nor a new metric alert.
- Write-Error "The alert data schema - $schemaId - is not supported."
- }
-
- # Parse fields related to resource affected.
- $ResourceName = $AlertContext.resourceName
- $ResourceType = $AlertContext.resourceType
- $ResourceGroupName = $AlertContext.resourceGroupName
- $ResourceId = $AlertContext.resourceId
- $SubId = $AlertContext.subscriptionId
-
- ## Your logic to handle the alert here.
-}
-else {
- # Error
- Write-Error "This runbook is meant to be started from an Azure alert webhook only."
-}
-
-```
-
-For a full example of a runbook that stops a virtual machine when an alert is triggered, see the [Azure Automation documentation](../../automation/automation-create-alert-triggered-runbook.md).
-
-## Partner integration via webhooks
-
-Most of our partners that integrate with classic alerts already support newer metric alerts through their integrations. Known integrations that already work with new metric alerts include:
-
-- [PagerDuty](https://www.pagerduty.com/docs/guides/azure-integration-guide/)
-- [OpsGenie](https://docs.opsgenie.com/docs/microsoft-azure-integration)
-- [Signl4](https://www.signl4.com/blog/mobile-alert-notifications-azure-monitor/)
-
-If you're using a partner integration that's not listed here, confirm with the provider that they work with new metric alerts.
-
-## Next steps
-
-- [Understand how the migration tool works](alerts-understand-migration.md)
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
- Title: Understand migration for Azure Monitor alerts
-description: Understand how the alerts migration works and troubleshoot problems.
-- Previously updated : 06/20/2023---
-# Understand migration options to newer alerts
-
-Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
-
-This article explains how manual migration and the voluntary migration tool used to migrate remaining alert rules work. It also describes solutions for some common problems.
-
-> [!IMPORTANT]
-> Activity log alerts (including Service health alerts) and log search alerts are not impacted by the migration. The migration only applies to classic alert rules described [here](./monitoring-classic-retirement.md#retirement-of-classic-monitoring-and-alerting-platform).
-
-> [!NOTE]
-> If your classic alert rules are invalid, that is, they're on [deprecated metrics](#classic-alert-rules-on-deprecated-metrics) or on resources that have been deleted, they won't be migrated and won't be available after the service is retired.
-
-## Manually migrating classic alerts to newer alerts
-
-Customers who want to migrate their remaining alert rules manually can do so by using the following sections, which also cover metrics that are retired and therefore can't be migrated directly.
-
-### Guest metrics on virtual machines
-
-Before you can create new metric alerts on guest metrics, the guest metrics must be sent to the Azure Monitor logs store. Follow these instructions to create alerts:
-
-- [Enabling guest metrics collection to log analytics](../agents/agent-data-sources.md)
-- [Creating log search alerts in Azure Monitor](./alerts-log.md)
-
-There are more options to collect guest metrics and alert on them. [Learn more](../agents/agents-overview.md).
-
-### Storage and Classic Storage account metrics
-
-All classic alerts on storage accounts can be migrated except alerts on these metrics:
-- PercentAuthorizationError
-- PercentClientOtherError
-- PercentNetworkError
-- PercentServerOtherError
-- PercentSuccess
-- PercentThrottlingError
-- PercentTimeoutError
-- AnonymousThrottlingError
-- SASThrottlingError
-- ThrottlingError
-
-Classic alert rules on Percent metrics must be migrated based on [the mapping between old and new storage metrics](../../storage/common/storage-metrics-migration.md#metrics-mapping-between-old-metrics-and-new-metrics). Thresholds will need to be modified appropriately because the new metric available is an absolute one.
-
-Classic alert rules on AnonymousThrottlingError, SASThrottlingError, and ThrottlingError must be split into two new alerts because there's no combined metric that provides the same functionality. Thresholds will need to be adapted appropriately.
-
-### Azure Cosmos DB metrics
-
-All classic alerts on Azure Cosmos DB metrics can be migrated except alerts on these metrics:
-- Average Requests per Second
-- Consistency Level
-- Http 2xx
-- Http 3xx
-- Max RUPM Consumed Per Minute
-- Max RUs Per Second
-- Mongo Other Request Charge
-- Mongo Other Request Rate
-- Observed Read Latency
-- Observed Write Latency
-- Service Availability
-- Storage Capacity
-
-Average Requests per Second, Consistency Level, Max RUPM Consumed Per Minute, Max RUs Per Second, Observed Read Latency, Observed Write Latency, and Storage Capacity aren't currently available in the [new system](../essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts).
-
-Alerts on request metrics like Http 2xx, Http 3xx, and Service Availability aren't migrated because the way requests are counted is different between classic metrics and new metrics. Alerts on these metrics will need to be manually recreated with thresholds adjusted.
-
-### Classic alert rules on deprecated metrics
-
-The following are classic alert rules on metrics that were previously supported but were eventually deprecated. A small percentage of customers might have invalid classic alert rules on such metrics. Because these alert rules are invalid, they won't be migrated.
-
-| Resource type| Deprecated metric(s) |
-|-|-- |
-| Microsoft.DBforMySQL/servers | compute_consumption_percent, compute_limit |
-| Microsoft.DBforPostgreSQL/servers | compute_consumption_percent, compute_limit |
-| Microsoft.Network/publicIPAddresses | defaultddostriggerrate |
-| Microsoft.SQL/servers/databases | service_level_objective, storage_limit, storage_used, throttling, dtu_consumption_percent |
-| Microsoft.Web/hostingEnvironments/multirolepools | averagememoryworkingset |
-| Microsoft.Web/hostingEnvironments/workerpools | bytesreceived, httpqueuelength |
-
-## How equivalent new alert rules and action groups are created
-
-The migration tool converts your classic alert rules to equivalent new alert rules and action groups. For most classic alert rules, equivalent new alert rules are on the same metric with the same properties such as `windowSize` and `aggregationType`. However, some classic alert rules are on metrics that have a different, equivalent metric in the new system. The following principles apply to the migration of classic alerts unless otherwise specified in the sections below:
-- **Frequency**: Defines how often a classic or new alert rule checks for the condition. The `frequency` in classic alert rules wasn't configurable by the user and was always 5 minutes for all resource types. The frequency of equivalent rules is also set to 5 minutes.
-- **Aggregation Type**: Defines how the metric is aggregated over the window of interest. The `aggregationType` is also the same between classic alerts and new alerts for most metrics. In some cases, because the metric is different between classic alerts and new alerts, the equivalent `aggregationType` or the `primary Aggregation Type` defined for the metric is used.
-- **Units**: Property of the metric on which the alert is created. Some equivalent metrics have different units. The threshold is adjusted appropriately as needed. For example, if the original metric has seconds as units but the equivalent new metric has milliseconds as units, the original threshold is multiplied by 1000 to ensure the same behavior.
-- **Window Size**: Defines the window over which metric data is aggregated to compare against the threshold. For standard `windowSize` values like 5 minutes, 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours, and 1 day, no change is made for the equivalent new alert rule. For other values, the closest `windowSize` is used. For most customers, there's no effect with this change. For a small percentage of customers, there might be a need to tweak the threshold to get exactly the same behavior.
-
-In the following sections, we detail the metrics that have a different, equivalent metric in the new system. Any metric that remains the same for classic and new alert rules isn't listed. You can find a list of metrics supported in the new system [here](../essentials/metrics-supported.md).
-
-### Microsoft.Storage/storageAccounts and Microsoft.ClassicStorage/storageAccounts
-
-For Storage account services like blob, table, file, and queue, the following metrics are mapped to equivalent metrics as shown below:
-
-| Metric in classic alerts | Equivalent metric in new alerts | Comments|
-|--|||
-| AnonymousAuthorizationError| Transactions metric with dimensions "ResponseType"="AuthorizationError" and "Authentication" = "Anonymous"| |
-| AnonymousClientOtherError | Transactions metric with dimensions "ResponseType"="ClientOtherError" and "Authentication" = "Anonymous" | |
-| AnonymousClientTimeOutError| Transactions metric with dimensions "ResponseType"="ClientTimeOutError" and "Authentication" = "Anonymous" | |
-| AnonymousNetworkError | Transactions metric with dimensions "ResponseType"="NetworkError" and "Authentication" = "Anonymous" | |
-| AnonymousServerOtherError | Transactions metric with dimensions "ResponseType"="ServerOtherError" and "Authentication" = "Anonymous" | |
-| AnonymousServerTimeOutError | Transactions metric with dimensions "ResponseType"="ServerTimeOutError" and "Authentication" = "Anonymous" | |
-| AnonymousSuccess | Transactions metric with dimensions "ResponseType"="Success" and "Authentication" = "Anonymous" | |
-| AuthorizationError | Transactions metric with dimensions "ResponseType"="AuthorizationError" | |
-| AverageE2ELatency | SuccessE2ELatency | |
-| AverageServerLatency | SuccessServerLatency | |
-| Capacity | BlobCapacity | Use `aggregationType` 'average' instead of 'last'. Metric only applies to Blob services |
-| ClientOtherError | Transactions metric with dimensions "ResponseType"="ClientOtherError" | |
-| ClientTimeoutError | Transactions metric with dimensions "ResponseType"="ClientTimeOutError" | |
-| ContainerCount | ContainerCount | Use `aggregationType` 'average' instead of 'last'. Metric only applies to Blob services |
-| NetworkError | Transactions metric with dimensions "ResponseType"="NetworkError" | |
-| ObjectCount | BlobCount| Use `aggregationType` 'average' instead of 'last'. Metric only applies to Blob services |
-| SASAuthorizationError | Transactions metric with dimensions "ResponseType"="AuthorizationError" and "Authentication" = "SAS" | |
-| SASClientOtherError | Transactions metric with dimensions "ResponseType"="ClientOtherError" and "Authentication" = "SAS" | |
-| SASClientTimeOutError | Transactions metric with dimensions "ResponseType"="ClientTimeOutError" and "Authentication" = "SAS" | |
-| SASNetworkError | Transactions metric with dimensions "ResponseType"="NetworkError" and "Authentication" = "SAS" | |
-| SASServerOtherError | Transactions metric with dimensions "ResponseType"="ServerOtherError" and "Authentication" = "SAS" | |
-| SASServerTimeOutError | Transactions metric with dimensions "ResponseType"="ServerTimeOutError" and "Authentication" = "SAS" | |
-| SASSuccess | Transactions metric with dimensions "ResponseType"="Success" and "Authentication" = "SAS" | |
-| ServerOtherError | Transactions metric with dimensions "ResponseType"="ServerOtherError" | |
-| ServerTimeOutError | Transactions metric with dimensions "ResponseType"="ServerTimeOutError" | |
-| Success | Transactions metric with dimensions "ResponseType"="Success" | |
-| TotalBillableRequests| Transactions | |
-| TotalEgress | Egress | |
-| TotalIngress | Ingress | |
-| TotalRequests | Transactions | |
-
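-If you recreate one of these rules manually instead of using the migration tool, the dimension filters in the table map to a dimension selection on the new **Transactions** metric. The following is only a sketch: the metric and dimension names come from the mapping above, while the threshold and aggregation are placeholder values.
-
-```powershell
-# Sketch only: criteria roughly equivalent to the classic AnonymousAuthorizationError metric,
-# built from the Transactions metric with ResponseType and Authentication dimensions.
-$responseType = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "ResponseType" -ValuesToInclude "AuthorizationError"
-$authentication = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "Authentication" -ValuesToInclude "Anonymous"
-
-$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Transactions" `
-    -DimensionSelection $responseType, $authentication `
-    -TimeAggregation Total -Operator GreaterThan -Threshold 5
-```
-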
-### Microsoft.DocumentDB/databaseAccounts
-
-For Azure Cosmos DB, equivalent metrics are as shown below:
-
-| Metric in classic alerts | Equivalent metric in new alerts | Comments|
-|--|||
-| AvailableStorage | AvailableStorage||
-| Data Size | DataUsage| |
-| Document Count | DocumentCount||
-| Index Size | IndexUsage||
-| Service Unavailable | ServiceAvailability||
-| TotalRequestUnits | TotalRequestUnits||
-| Throttled Requests | TotalRequests with dimension "StatusCode" = "429"| 'Average' aggregation type is corrected to 'Count'|
-| Internal Server Errors | TotalRequests with dimension "StatusCode" = "500"| 'Average' aggregation type is corrected to 'Count'|
-| Http 401 | TotalRequests with dimension "StatusCode" = "401"| 'Average' aggregation type is corrected to 'Count'|
-| Http 400 | TotalRequests with dimension "StatusCode" = "400"| 'Average' aggregation type is corrected to 'Count'|
-| Total Requests | TotalRequests| 'Max' aggregation type is corrected to 'Count'|
-| Mongo Count Request Charge| MongoRequestCharge with dimension "CommandName" = "count"||
-| Mongo Count Request Rate | MongoRequestsCount with dimension "CommandName" = "count"||
-| Mongo Delete Request Charge | MongoRequestCharge with dimension "CommandName" = "delete"||
-| Mongo Delete Request Rate | MongoRequestsCount with dimension "CommandName" = "delete"||
-| Mongo Insert Request Charge | MongoRequestCharge with dimension "CommandName" = "insert"||
-| Mongo Insert Request Rate | MongoRequestsCount with dimension "CommandName" = "insert"||
-| Mongo Query Request Charge | MongoRequestCharge with dimension "CommandName" = "find"||
-| Mongo Query Request Rate | MongoRequestsCount with dimension "CommandName" = "find"||
-| Mongo Update Request Charge | MongoRequestCharge with dimension "CommandName" = "update"||
-| Mongo Insert Failed Requests | MongoRequestCount with dimensions "CommandName" = "insert" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-| Mongo Query Failed Requests | MongoRequestCount with dimensions "CommandName" = "query" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-| Mongo Count Failed Requests | MongoRequestCount with dimensions "CommandName" = "count" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-| Mongo Update Failed Requests | MongoRequestCount with dimensions "CommandName" = "update" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-| Mongo Other Failed Requests | MongoRequestCount with dimensions "CommandName" = "other" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-| Mongo Delete Failed Requests | MongoRequestCount with dimensions "CommandName" = "delete" and "Status" = "failed"| 'Average' aggregation type is corrected to 'Count'|
-
-### How equivalent action groups are created
-
-Classic alert rules had email, webhook, logic app, and runbook actions tied to the alert rule itself. New alert rules use action groups that can be reused across multiple alert rules. The migration tool creates a single action group for the same set of actions, no matter how many alert rules use it. Action groups created by the migration tool use the naming format 'Migrated_AG*'.
-
-> [!NOTE]
-> Classic alerts sent localized emails based on the locale of the classic administrator when notifying classic administrator roles. New alert emails are sent via action groups and are only in English.
-
-## Rollout phases
-
-The migration tool is rolling out in phases to customers that use classic alert rules. Subscription owners will receive an email when the subscription is ready to be migrated by using the tool.
-
-> [!NOTE]
-> Because the tool is being rolled out in phases, you might see that some of your subscriptions are not yet ready to be migrated during the early phases.
-
-Most of the subscriptions are currently marked as ready for migration. Only subscriptions that have classic alerts on the following resource types are not yet ready for migration.
-- Microsoft.classicCompute/domainNames/slots/roles
-- Microsoft.insights/components
-
-## Who can trigger the migration?
-
-Any user who has the built-in role of Monitoring Contributor at the subscription level can trigger the migration. Users who have a custom role with the following permissions can also trigger the migration:
-- */read
-- Microsoft.Insights/actiongroups/*
-- Microsoft.Insights/AlertRules/*
-- Microsoft.Insights/metricAlerts/*
-- Microsoft.AlertsManagement/smartDetectorAlertRules/*
-
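-As a sketch, such a custom role could be defined with Azure PowerShell. The role name and subscription ID below are placeholders.
-
-```powershell
-# Sketch only: define a custom role with the permissions listed above.
-# The role name and subscription ID are placeholders.
-$role = Get-AzRoleDefinition "Reader"
-$role.Id = $null
-$role.Name = "Classic Alert Migration Operator"
-$role.Description = "Can trigger the migration of classic alert rules."
-$role.Actions.Clear()
-$role.Actions.Add("*/read")
-$role.Actions.Add("Microsoft.Insights/actiongroups/*")
-$role.Actions.Add("Microsoft.Insights/AlertRules/*")
-$role.Actions.Add("Microsoft.Insights/metricAlerts/*")
-$role.Actions.Add("Microsoft.AlertsManagement/smartDetectorAlertRules/*")
-$role.AssignableScopes.Clear()
-$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
-New-AzRoleDefinition -Role $role
-```
-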
-> [!NOTE]
-> In addition to the above permissions, your subscription must be registered with the Microsoft.AlertsManagement resource provider. This registration is required to successfully migrate Failure Anomaly alerts on Application Insights.
-
-## Common problems and remedies
-
-After you trigger the migration, you'll receive an email at the addresses you provided notifying you that the migration is complete or that action is needed from you. This section describes some common problems and how to deal with them.
-
-### Validation failed
-
-Because of some recent changes to classic alert rules in your subscription, the subscription can't be migrated. This problem is temporary. You can restart the migration after the migration status moves back to **Ready for migration** in a few days.
-
-### Scope lock preventing us from migrating your rules
-
-As part of the migration, new metric alerts and new action groups will be created, and then classic alert rules will be deleted. However, a scope lock can prevent us from creating or deleting resources. Depending on the scope lock, some or all rules can't be migrated. You can resolve this problem by removing the scope lock for the subscription, resource group, or resource that's listed in the [migration tool](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/MigrationBladeViewModel), and then triggering the migration again. Scope locks can't be disabled and must be removed during the migration process. [Learn more about managing scope locks](../../azure-resource-manager/management/lock-resources.md#portal).
-
-### Policy with 'Deny' effect preventing us from migrating your rules
-
-As part of the migration, new metric alerts and new action groups will be created, and then classic alert rules will be deleted. However, an [Azure Policy](../../governance/policy/index.yml) assignment can prevent us from creating resources. Depending on the policy assignment, some or all rules can't be migrated. The policy assignments that are blocking the process are listed in the [migration tool](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/MigrationBladeViewModel). Resolve this problem in one of the following ways:
-
-- Exclude the subscriptions, resource groups, or individual resources from the policy assignment during the migration process. [Learn more about managing policy exclusion scopes](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion).
-- Set the 'Enforcement Mode' to **Disabled** on the policy assignment. [Learn more about the policy assignment's enforcementMode property](../../governance/policy/concepts/assignment-structure.md#enforcement-mode).
-- Set an Azure Policy exemption (preview) on the subscriptions, resource groups, or individual resources for the policy assignment. [Learn more about the Azure Policy exemption structure](../../governance/policy/concepts/exemption-structure.md).
-- Remove or change the effect to 'disabled', 'audit', 'append', or 'modify' (which, for example, can solve issues relating to missing tags). [Learn more about managing policy effects](../../governance/policy/concepts/definition-structure.md#policy-rule).
-
-## Next steps
-
-- [Prepare for the migration](alerts-prepare-migration.md)
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-webhooks.md
- Title: Call a webhook with a classic metric alert in Azure Monitor
-description: Learn how to reroute Azure metric alerts to other, non-Azure systems.
-- Previously updated : 05/28/2023---
-# Call a webhook with a classic metric alert in Azure Monitor
-
-> [!WARNING]
-> This article describes how to use older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Microsoft Azure operated by 21Vianet will retire on **29 February 2024**.
->
-
-You can use webhooks to route an Azure alert notification to other systems for post-processing or custom actions. You can use a webhook on an alert to route it to services that send SMS messages, to log bugs, to notify a team via chat or messaging services, or for various other actions.
-
-This article describes how to set a webhook on an Azure metric alert. It also shows you what the payload for the HTTP POST to a webhook looks like. For information about the setup and schema for an Azure activity log alert (alert on events), see [Call a webhook on an Azure activity log alert](../alerts/alerts-log-webhook.md).
-
-Azure alerts use HTTP POST to send the alert contents in JSON format to a webhook URI that you provide when you create the alert. The schema is defined later in this article. The URI must be a valid HTTP or HTTPS endpoint. Azure posts one entry per request when an alert is activated.
-
-## Configure webhooks via the Azure portal
-To add or update the webhook URI, in the [Azure portal](https://portal.azure.com/), go to **Create/Update Alerts**.
--
-You can also configure an alert to post to a webhook URI by using [Azure PowerShell cmdlets](../powershell-samples.md#create-metric-alerts), a [cross-platform CLI](../cli-samples.md#work-with-alerts), or [Azure Monitor REST APIs](/rest/api/monitor/alertrules).
-
-## Authenticate the webhook
-The webhook can authenticate by using token-based authorization. The webhook URI is saved with a token ID. For example: `https://mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue`
-
-## Payload schema
-The POST operation contains the following JSON payload and schema for all metric-based alerts:
-
-```JSON
-{
- "status": "Activated",
- "context": {
- "timestamp": "2015-08-14T22:26:41.9975398Z",
- "id": "/subscriptions/s1/resourceGroups/useast/providers/microsoft.insights/alertrules/ruleName1",
- "name": "ruleName1",
- "description": "some description",
- "conditionType": "Metric",
- "condition": {
- "metricName": "Requests",
- "metricUnit": "Count",
- "metricValue": "10",
- "threshold": "10",
- "windowSize": "15",
- "timeAggregation": "Average",
- "operator": "GreaterThanOrEqual"
- },
- "subscriptionId": "s1",
- "resourceGroupName": "useast",
- "resourceName": "mysite1",
- "resourceType": "microsoft.foo/sites",
- "resourceId": "/subscriptions/s1/resourceGroups/useast/providers/microsoft.foo/sites/mysite1",
- "resourceRegion": "centralus",
- "portalLink": "https://portal.azure.com/#resource/subscriptions/s1/resourceGroups/useast/providers/microsoft.foo/sites/mysite1"
- },
- "properties": {
- "key1": "value1",
- "key2": "value2"
- }
-}
-```
--
-| Field | Mandatory | Fixed set of values | Notes |
-|: |: |: |: |
-| status |Y |Activated, Resolved |The status for the alert based on the conditions you set. |
-| context |Y | |The alert context. |
-| timestamp |Y | |The time at which the alert was triggered. |
-| id |Y | |Every alert rule has a unique ID. |
-| name |Y | |The alert name. |
-| description |Y | |A description of the alert. |
-| conditionType |Y |Metric, Event |Two types of alerts are supported: metric and event. Metric alerts are based on a metric condition. Event alerts are based on an event in the activity log. Use this value to check whether the alert is based on a metric or on an event. |
-| condition |Y | |The specific fields to check based on the **conditionType** value. |
-| metricName |For metric alerts | |The name of the metric that defines what the rule monitors. |
-| metricUnit |For metric alerts |Bytes, BytesPerSecond, Count, CountPerSecond, Percent, Seconds |The unit allowed in the metric. See [allowed values](/previous-versions/azure/reference/dn802430(v=azure.100)). |
-| metricValue |For metric alerts | |The actual value of the metric that caused the alert. |
-| threshold |For metric alerts | |The threshold value at which the alert is activated. |
-| windowSize |For metric alerts | |The period of time that's used to monitor alert activity based on the threshold. The value must be between 5 minutes and 1 day. The value must be in ISO 8601 duration format. |
-| timeAggregation |For metric alerts |Average, Last, Maximum, Minimum, None, Total |How the data that's collected should be combined over time. The default value is Average. See [allowed values](/previous-versions/azure/reference/dn802410(v=azure.100)). |
-| operator |For metric alerts | |The operator that's used to compare the current metric data to the set threshold. |
-| subscriptionId |Y | |The Azure subscription ID. |
-| resourceGroupName |Y | |The name of the resource group for the affected resource. |
-| resourceName |Y | |The resource name of the affected resource. |
-| resourceType |Y | |The resource type of the affected resource. |
-| resourceId |Y | |The resource ID of the affected resource. |
-| resourceRegion |Y | |The region or location of the affected resource. |
-| portalLink |Y | |A direct link to the portal resource summary page. |
-| properties |N |Optional |A set of key/value pairs that has details about the event. For example, `Dictionary<String, String>`. The properties field is optional. In a custom UI or logic app-based workflow, users can enter key/value pairs that can be passed via the payload. An alternate way to pass custom properties back to the webhook is via the webhook URI itself (as query parameters). |
-
-> [!NOTE]
-> You can set the **properties** field only by using [Azure Monitor REST APIs](/rest/api/monitor/alertrules).
->
->
-
-## Next steps
-* Learn more about Azure alerts and webhooks in the video [Integrate Azure alerts with PagerDuty](https://go.microsoft.com/fwlink/?LinkId=627080).
-* Learn how to [execute Azure Automation scripts (runbooks) on Azure alerts](https://go.microsoft.com/fwlink/?LinkId=627081).
-* Learn how to [use a logic app to send an SMS message via Twilio from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app).
-* Learn how to [use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app).
-* Learn how to [use a logic app to send a message to an Azure Queue from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app).
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
- Title: Legacy Log Analytics Alert REST API
-description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and examples for performing different operations.
-- Previously updated : 06/20/2023---
-# Legacy Log Analytics Alert REST API
-
-This article describes how to manage alert rules using the legacy API.
-
-> [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics Alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log search alerts by that date.
-> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits).
-
-The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and several examples for performing different operations.
-
-The Log Analytics Search REST API is RESTful and can be accessed via the Azure Resource Manager REST API. In this article, you'll find examples where the API is accessed from a PowerShell command line by using [ARMClient](https://github.com/projectkudu/ARMClient). This open-source command-line tool simplifies invoking the Azure Resource Manager API.
-
-Using ARMClient and PowerShell is one of many ways to access the Log Analytics Search API. With these tools, you can use the RESTful Azure Resource Manager API to make calls to Log Analytics workspaces and run search commands within them. The API outputs search results in JSON format so that you can use them in many different ways programmatically.
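-
-If you haven't used ARMClient before, you typically sign in first. For example (a sketch only):
-
-```powershell
-# Sketch only: sign in with ARMClient before calling the Log Analytics Alert API.
-armclient login
-```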
-
-## Prerequisites
-
-Currently, alerts can only be created with a saved search in Log Analytics. For more information, see the [Log Search REST API](../logs/log-query-overview.md).
-
-## Schedules
-
-A saved search can have one or more schedules. The schedule defines how often the search is run and the time interval over which the criteria are identified. Schedules have the properties described in the following table:
-
-| Property | Description |
-|: |: |
-| `Interval` |How often the search is run. Measured in minutes. |
-| `QueryTimeSpan` |The time interval over which the criteria are evaluated. Must be equal to or greater than `Interval`. Measured in minutes. |
-| `Version` |The API version being used. Currently, this setting should always be `1`. |
-
-For example, consider an event query with an `Interval` of 15 minutes and a `QueryTimeSpan` of 30 minutes. In this case, the query would be run every 15 minutes. An alert would be triggered if the criteria continued to resolve to `true` over a 30-minute span.
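
As a concrete illustration of that example, a schedule body along those lines might look like the following sketch (the search ID and schedule name are placeholders):

```powershell
# A sketch matching the example above: run every 15 minutes, evaluate the last 30 minutes.
$scheduleJson = "{'properties': { 'Interval': 15, 'QueryTimeSpan': 30, 'Enabled': 'true' } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/myexampleschedule?api-version=2015-03-20 $scheduleJson
```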
-
-### Retrieve schedules
-
-Use the Get method to retrieve all schedules for a saved search.
-
-```powershell
-armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules?api-version=2015-03-20
-```
-
-Use the Get method with a schedule ID to retrieve a particular schedule for a saved search.
-
-```powershell
-armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}?api-version=2015-03-20
-```
-
-The following sample response is for a schedule:
-
-```json
-{
- "value": [{
- "id": "subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/sampleRG/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace/savedSearches/0f0f4853-17f8-4ed1-9a03-8e888b0d16ec/schedules/a17b53ef-bd70-4ca4-9ead-83b00f2024a8",
- "etag": "W/\"datetime'2016-02-25T20%3A54%3A49.8074679Z'\"",
- "properties": {
- "Interval": 15,
- "QueryTimeSpan": 15,
-            "Enabled": true
-        }
- }]
-}
-```
-
-### Create a schedule
-
-Use the Put method with a unique schedule ID to create a new schedule. Two schedules can't have the same ID even if they're associated with different saved searches. When you create a schedule in the Log Analytics console, a GUID is created for the schedule ID.
-
-> [!NOTE]
-> The name for all saved searches, schedules, and actions created with the Log Analytics API must be in lowercase.
-
-```powershell
-$scheduleJson = "{'properties': { 'Interval': 15, 'QueryTimeSpan':15, 'Enabled':'true' } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 $scheduleJson
-```
-
-### Edit a schedule
-
-Use the Put method with an existing schedule ID for the same saved search to modify that schedule. In the following example, the schedule is disabled. The body of the request must include the *etag* of the schedule.
-
-```powershell
-$scheduleJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A49.8074679Z'\"','properties': { 'Interval': 15, 'QueryTimeSpan':15, 'Enabled':'false' } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 $scheduleJson
-```
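
Because the edit requires the current *etag*, one workable pattern is to read the schedule first and reuse its etag in the update body. This is only a sketch, and it assumes the single-schedule GET returns an `etag` property at the top level of the response:

```powershell
# A sketch: fetch the schedule, capture its etag, then disable it in place.
$schedule = armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 | ConvertFrom-Json
$etag = $schedule.etag
$scheduleJson = "{'etag': '$etag', 'properties': { 'Interval': 15, 'QueryTimeSpan': 15, 'Enabled': 'false' } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/mynewschedule?api-version=2015-03-20 $scheduleJson
```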
-
-### Delete schedules
-
-Use the Delete method with a schedule ID to delete a schedule.
-
-```powershell
-armclient delete /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}?api-version=2015-03-20
-```
-
-## Actions
-
-A schedule can have multiple actions. An action might define one or more processes to perform, such as sending an email or starting a runbook. An action also might define a threshold that determines when the results of a search match some criteria. Some actions will define both so that the processes are performed when the threshold is met.
-
-All actions have the properties described in the following table. Actions of different types have additional properties, which are described in later sections:
-
-| Property | Description |
-|: |: |
-| `Type` |Type of the action. Currently, the possible values are `Alert` and `Webhook`. |
-| `Name` |Display name for the alert. |
-| `Version` |The API version being used. Currently, this setting should always be `1`. |
-
-### Retrieve actions
-
-Use the Get method to retrieve all actions for a schedule.
-
-```powershell
-armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions?api-version=2015-03-20
-```
-
-Use the Get method with the action ID to retrieve a particular action for a schedule.
-
-```powershell
-armclient get /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/{Action ID}?api-version=2015-03-20
-```
-
-### Create or edit actions
-
-Use the Put method with an action ID that's unique to the schedule to create a new action. When you create an action in the Log Analytics console, a GUID is created for the action ID.
-
-> [!NOTE]
-> The name for all saved searches, schedules, and actions created with the Log Analytics API must be in lowercase.
-
-Use the Put method with an existing action ID for the same saved search to modify that action. The body of the request must include the etag of the action.
-
-The request format for creating a new action varies by action type, so these examples are provided in the following sections.
-
-### Delete actions
-
-Use the Delete method with the action ID to delete an action.
-
-```powershell
-armclient delete /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/{Action ID}?api-version=2015-03-20
-```
-
-### Alert actions
-
-A schedule should have one and only one Alert action. Alert actions have one or more of the sections described in the following table:
-
-| Section | Description | Usage |
-|: |: |: |
-| Threshold |Criteria for when the action is run.| Required for every alert, before or after they're extended to Azure. |
-| Severity |Label used to classify the alert when triggered.| Required for every alert, before or after they're extended to Azure. |
-| Suppress |Option to stop notifications from alerts. | Optional for every alert, before or after they're extended to Azure. |
-| Action groups |IDs of Azure `ActionGroup` where actions required are specified, like emails, SMSs, voice calls, webhooks, automation runbooks, and ITSM Connectors.| Required after alerts are extended to Azure.|
-| Customize actions|Modify the standard output for select actions from `ActionGroup`.| Optional for every alert and can be used after alerts are extended to Azure. |
-
-### Thresholds
-
-An Alert action should have one and only one threshold. When the results of a saved search match the threshold in an action associated with that search, any other processes in that action are run. An action can also contain only a threshold so that it can be used with actions of other types that don't contain thresholds.
-
-Thresholds have the properties described in the following table:
-
-| Property | Description |
-|: |: |
-| `Operator` |Operator for the threshold comparison. <br> gt = Greater than <br> lt = Less than |
-| `Value` |Value for the threshold. |
-
-For example, consider an event query with an `Interval` of 15 minutes, a `QueryTimeSpan` of 30 minutes, and a `Threshold` of greater than 10. In this case, the query would be run every 15 minutes. An alert would be triggered if the query returned more than 10 events created over a 30-minute span.
-
-The following sample response is for an action with only a `Threshold`:
-
-```json
-"etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "My threshold action",
- "Threshold": {
- "Operator": "gt",
- "Value": 10
- },
- "Version": 1
-}
-```
-
-Use the Put method with a unique action ID to create a new threshold action for a schedule.
-
-```powershell
-$thresholdJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdJson
-```
-
-Use the Put method with an existing action ID to modify a threshold action for a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$thresholdJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdJson
-```
-
-#### Severity
-
-Log Analytics allows you to classify your alerts into categories for easier management and triage. The Alerts severity levels are `informational`, `warning`, and `critical`. These categories are mapped to the normalized severity scale of Azure Alerts as shown in the following table:
-
-|Log Analytics severity level |Azure Alerts severity level |
-|||
-|`critical` |Sev 0|
-|`warning` |Sev 1|
-|`informational` | Sev 2|
-
-The following sample response is for an action with only `Threshold` and `Severity`:
-
-```json
-"etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "My threshold action",
- "Threshold": {
- "Operator": "gt",
- "Value": 10
- },
- "Severity": "critical",
- "Version": 1
-}
-```
-
-Use the Put method with a unique action ID to create a new action for a schedule with `Severity`.
-
-```powershell
-$thresholdWithSevJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdWithSevJson
-```
-
-Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$thresholdWithSevJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/mythreshold?api-version=2015-03-20 $thresholdWithSevJson
-```
-
-#### Suppress
-
-Log Analytics-based query alerts fire every time the threshold is met or exceeded. Depending on the logic in the query, an alert might fire for a series of consecutive intervals, which means notifications are sent repeatedly. To prevent this scenario, you can set the `Suppress` option, which instructs Log Analytics to wait a specified amount of time before it sends notifications for the alert rule again.
-
-For example, if `Suppress` is set to 30 minutes, the alert fires the first time and sends the configured notifications. Log Analytics then waits 30 minutes before it sends notifications for the alert rule again. In the interim, the alert rule continues to run; only the notifications are suppressed, regardless of how many times the alert rule fires during that period.
-
-The `Suppress` property of a log search alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value.
-
-The following sample response is for an action with only `Threshold`, `Severity`, and `Suppress` properties.
-
-```json
-"etag": "W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "My threshold action",
- "Threshold": {
- "Operator": "gt",
- "Value": 10
- },
- "Throttling": {
- "DurationInMinutes": 30
- },
- "Severity": "critical",
- "Version": 1
-}
-```
-
-Use the Put method with a unique action ID to create a new action for a schedule with `Severity` and `Throttling` (suppression).
-
-```powershell
-$AlertSuppressJson = "{'properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Throttling': { 'DurationInMinutes': 30 },'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myalert?api-version=2015-03-20 $AlertSuppressJson
-```
-
-Use the Put method with an existing action ID to modify a severity action for a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$AlertSuppressJson = "{'etag': 'W/\"datetime'2016-02-25T20%3A54%3A20.1302566Z'\"','properties': { 'Name': 'My Threshold', 'Version':'1','Severity': 'critical', 'Type':'Alert', 'Throttling': { 'DurationInMinutes': 30 },'Threshold': { 'Operator': 'gt', 'Value': 10 } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myalert?api-version=2015-03-20 $AlertSuppressJson
-```
-
-#### Action groups
-
-All alerts in Azure use action groups as the default mechanism for handling actions. With an action group, you can specify your actions once and then associate the action group with multiple alerts across Azure without the need to declare the same actions repeatedly. Action groups support multiple actions like email, SMS, voice call, ITSM connection, automation runbook, and webhook URI.
-
-For users who have extended their alerts into Azure, a schedule should now have action group details passed along with `Threshold` to be able to create an alert. Email details, webhook URLs, runbook automation details, and other actions need to be defined in an action group before you create an alert. You can create an [action group from Azure Monitor](./action-groups.md) in the Azure portal or use the [Action Group API](/rest/api/monitor/actiongroups).
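
If you don't have one yet, an action group can also be created with the same ARMClient approach. This is only a sketch; the `api-version` and property names shown (for example, `groupShortName` and `emailReceivers`) follow the public Action Group API but should be verified against the current API reference, and the receiver details are placeholders:

```powershell
# A sketch, assuming the Microsoft.Insights/actionGroups schema; replace the receiver details with your own.
$actionGroupJson = "{'location': 'Global', 'properties': { 'groupShortName': 'oncall', 'enabled': true, 'emailReceivers': [ { 'name': 'primary email', 'emailAddress': 'oncall@contoso.com' } ] } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/microsoft.insights/actionGroups/test-actiongroup?api-version=2021-09-01 $actionGroupJson
```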
-
-To associate an action group with an alert, specify the unique Azure Resource Manager ID of the action group in the alert definition. The following sample illustrates its use:
-
-```json
-"etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "test-alert",
- "Description": "I need to put a description here",
- "Threshold": {
- "Operator": "gt",
- "Value": 12
- },
- "AzNsNotification": {
- "GroupIds": [
- "/subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup"
- ]
- },
- "Severity": "critical",
- "Version": 1
-}
-```
-
-Use the Put method with a unique action ID to associate an existing action group with a schedule. The following sample illustrates its use:
-
-```powershell
-$AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup']} } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$AzNsJson = "{'etag': 'W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': { 'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'] } } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-#### Customize actions
-
-By default, actions follow standard templates and format for notifications. But you can customize some actions, even if they're controlled by action groups. Currently, customization is possible for `EmailSubject` and `WebhookPayload`.
-
-##### Customize EmailSubject for an action group
-
-By default, the email subject for alerts is Alert Notification `<AlertName>` for `<WorkspaceName>`. But you can customize the subject to include specific words or tags, which makes it easier to apply filter rules in your inbox. The customized email header details need to be sent along with `ActionGroup` details, as in the following sample:
-
-```json
-"etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "test-alert",
- "Description": "I need to put a description here",
- "Threshold": {
- "Operator": "gt",
- "Value": 12
- },
- "AzNsNotification": {
- "GroupIds": [
- "/subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup"
- ],
- "CustomEmailSubject": "Azure Alert fired"
- },
- "Severity": "critical",
- "Version": 1
-}
-```
-
-Use the Put method with a unique action ID to associate an existing action group with customization for a schedule. The following sample illustrates the use:
-
-```powershell
-$AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired'} } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$AzNsJson = "{'etag': 'W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired'} } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-##### Customize WebhookPayload for an action group
-
-By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific variables supported to meet requirements of the webhook endpoint. For more information, see [Webhook action for log search alert rules](./alerts-log-webhook.md).
-
-The customized webhook details must be sent along with `ActionGroup` details. They'll be applied to all webhook URIs specified inside the action group. The following sample illustrates the use:
-
-```json
-"etag": "W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"",
-"properties": {
- "Type": "Alert",
- "Name": "test-alert",
- "Description": "I need to put a description here",
- "Threshold": {
- "Operator": "gt",
- "Value": 12
- },
- "AzNsNotification": {
- "GroupIds": [
- "/subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup"
- ],
- "CustomWebhookPayload": "{\"field1\":\"value1\",\"field2\":\"value2\"}",
- "CustomEmailSubject": "Azure Alert fired"
- },
- "Severity": "critical",
- "Version": 1
-},
-```
-
-Use the Put method with a unique action ID to associate an existing action group with customization for a schedule. The following sample illustrates the use:
-
-```powershell
-$AzNsJson = "{'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired','CustomWebhookPayload': '{\"field1\":\"value1\",\"field2\":\"value2\"}'} } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-Use the Put method with an existing action ID to modify an action group associated with a schedule. The body of the request must include the etag of the action.
-
-```powershell
-$AzNsJson = "{'etag': 'W/\"datetime'2017-12-13T10%3A52%3A21.1697364Z'\"', 'properties': { 'Name': 'test-alert', 'Version':'1', 'Type':'Alert', 'Threshold': { 'Operator': 'gt', 'Value': 12 },'Severity': 'critical', 'AzNsNotification': {'GroupIds': ['subscriptions/1234a45-123d-4321-12aa-123b12a5678/resourcegroups/my-resource-group/providers/microsoft.insights/actiongroups/test-actiongroup'], 'CustomEmailSubject': 'Azure Alert fired', 'CustomWebhookPayload': '{\"field1\":\"value1\",\"field2\":\"value2\"}'} } }"
-armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Workspace Name}/savedSearches/{Search ID}/schedules/{Schedule ID}/actions/myAzNsaction?api-version=2015-03-20 $AzNsJson
-```
-
-## Next steps
-
-* Use the [REST API to perform log searches](../logs/log-query-overview.md) in Log Analytics.
-* Learn about [log search alerts in Azure Monitor](./alerts-types.md#log-alerts).
-* Learn how to [create, edit, or manage log search alert rules in Azure Monitor](./alerts-log.md).
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Microsoft Entra authentication for Application Insights description: Learn how to enable Microsoft Entra authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Previously updated : 11/15/2023 Last updated : 04/01/2024 ms.devlang: csharp
-# ms.devlang: csharp, java, javascript, python
The following preliminary steps are required to enable Microsoft Entra authentic
- [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md). - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md). - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).-- Have an Owner role to the resource group to grant access by using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+- Have an Owner role to the resource group if you want to grant access by using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
- Understand the [unsupported scenarios](#unsupported-scenarios). ## Unsupported scenarios
-The following SDKs and features are unsupported for use with Microsoft Entra authenticated ingestion:
+The following Software Development Kits (SDKs) and features are unsupported for use with Microsoft Entra authenticated ingestion:
- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br /> Microsoft Entra authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0. - [ApplicationInsights JavaScript web SDK](javascript.md). - [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5.-- [Certificate/secret-based Microsoft Entra ID](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use managed identities instead. - On-by-default [autoinstrumentation/codeless monitoring](codeless-overview.md) (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions. - [Profiler](profiler-overview.md).
Application Insights .NET SDK supports the credential classes provided by [Azure
- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities. - For system-assigned, use the default constructor without parameters. - For user-assigned, provide the client ID to the constructor.-- We recommend `ClientSecretCredential` for service principals.
- - Provide the tenant ID, client ID, and client secret to the constructor.
The following example shows how to manually create and configure `TelemetryConfiguration` by using .NET:
appInsights.defaultClient.config.aadTokenCredential = credential;
```
-#### ClientSecretCredential
-
-```javascript
-import appInsights from "applicationinsights";
-import { ClientSecretCredential } from "@azure/identity";
-
-const credential = new ClientSecretCredential(
- "<YOUR_TENANT_ID>",
- "<YOUR_CLIENT_ID>",
- "<YOUR_CLIENT_SECRET>"
- );
-appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").start();
-appInsights.defaultClient.config.aadTokenCredential = credential;
-
-```
- ### [Java](#tab/java) > [!NOTE]
The following example shows how to configure the Java agent to use user-assigned
:::image type="content" source="media/azure-ad-authentication/user-assigned-managed-identity.png" alt-text="Screenshot that shows user-assigned managed identity." lightbox="media/azure-ad-authentication/user-assigned-managed-identity.png":::
-#### Client secret
-
-The following example shows how to configure the Java agent to use a service principal for authentication with Microsoft Entra ID. We recommend using this type of authentication only during development. The ultimate goal of adding the authentication feature is to eliminate secrets.
-
-```JSON
-{
- "connectionString": "App Insights Connection String with IngestionEndpoint",
- "authentication": {
- "enabled": true,
- "type": "CLIENTSECRET",
- "clientId":"<YOUR CLIENT ID>",
- "clientSecret":"<YOUR CLIENT SECRET>",
- "tenantId":"<YOUR TENANT ID>"
- }
-}
-```
--- #### Environment variable configuration The `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable lets Application Insights authenticate to Microsoft Entra ID and send telemetry.
tracer = Tracer(
```
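
As a quick illustration of that environment variable, the following sketch shows the two commonly documented value shapes; treat the exact strings and the client ID placeholder as assumptions to verify against the current guidance:

```powershell
# A sketch (hypothetical placeholders). Set the variable before the application starts.
# System-assigned managed identity:
$env:APPLICATIONINSIGHTS_AUTHENTICATION_STRING = "Authorization=AAD"
# User-assigned managed identity (substitute the identity's client ID):
$env:APPLICATIONINSIGHTS_AUTHENTICATION_STRING = "Authorization=AAD;ClientId=<client-id>"
```
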
-#### Client secret
-
-```python
-from azure.identity import ClientSecretCredential
-
-from opencensus.ext.azure.trace_exporter import AzureExporter
-from opencensus.trace.samplers import ProbabilitySampler
-from opencensus.trace.tracer import Tracer
-
-tenant_id = "<tenant-id>"
-client_id = "<client-id>"
-client_secret = "<client-secret>"
-
-credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
-tracer = Tracer(
- exporter=AzureExporter(credential=credential, connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>"),
- sampler=ProbabilitySampler(1.0)
-)
-...
-```
- ## Disable local authentication
You can disable local authentication by using the Azure portal or Azure Policy o
:::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot that shows local authentication with the Enabled/Disabled button.":::
-1. After your resource has disabled local authentication, you'll see the corresponding information in the **Overview** pane.
+1. After disabling local authentication on your resource, you'll see the corresponding information in the **Overview** pane.
:::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (select to change) local authentication button.":::
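
Outside the portal, the same switch can in principle be flipped with an ARM call. This is only a sketch: it assumes the `DisableLocalAuth` property and the `2020-02-02` API version of the components resource, and a full PUT replaces the resource definition, so verify the body against your existing resource before using it:

```powershell
# A sketch using ARMClient; all placeholder values are hypothetical, and the full resource body is required for PUT.
$componentJson = "{'location': '<region>', 'kind': 'web', 'properties': { 'Application_Type': 'web', 'DisableLocalAuth': true } }"
armclient put /subscriptions/{Subscription ID}/resourceGroups/{ResourceGroupName}/providers/microsoft.insights/components/{Application Insights Name}?api-version=2020-02-02 $componentJson
```
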
If you're using sovereign clouds, you can find the audience information in the c
*InstrumentationKey={profile.InstrumentationKey};IngestionEndpoint={ingestionEndpoint};LiveEndpoint={liveDiagnosticsEndpoint};AADAudience={aadAudience}*
-The audience parameter, AADAudience, may vary depending on your specific environment.
+The audience parameter, AADAudience, can vary depending on your specific environment.
## Troubleshooting
The ingestion service returns specific errors, regardless of the SDK language. N
#### HTTP/1.1 400 Authentication not supported
-This error indicates that the resource is configured for Microsoft Entra-only. The SDK hasn't been correctly configured and is sending to the incorrect API.
+This error shows the resource is set for Microsoft Entra-only. You need to correctly configure the SDK because it's sending to the wrong API.
> [!NOTE] > "v2/track" doesn't support Microsoft Entra ID. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
Next, you should identify exceptions in the SDK logs or network errors from Azur
#### HTTP/1.1 403 Unauthorized
-This error indicates that the SDK is configured with credentials that haven't been given permission to the Application Insights resource or subscription.
+This error means the SDK uses credentials without permission for the Application Insights resource or subscription.
-Next, you should review the Application Insights resource's access control. The SDK must be configured with a credential that's been granted the Monitoring Metrics Publisher role.
+First, check the Application Insights resource's access control. You must configure the SDK with credentials that have the Monitoring Metrics Publisher role.
### Language-specific troubleshooting
You can inspect network traffic by using a tool like Fiddler. To enable the traf
} ```
-Or add the following JVM args while running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
+Or add the following Java Virtual Machine (JVM) args while running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
If Microsoft Entra ID is enabled in the agent, outbound traffic includes the HTTP header `Authorization`. #### 401 Unauthorized
-If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 401, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. You probably haven't enabled Microsoft Entra authentication on the agent, but your Application Insights resource is configured with `DisableLocalAuth: true`. Make sure you're passing in a valid credential and that it has permission to access your Application Insights resource.
+If you see the message, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 401, please check your credentials` in the log, it means the agent couldn't send telemetry. You likely didn't enable Microsoft Entra authentication on the agent, while your Application Insights resource has `DisableLocalAuth: true`. Ensure you pass a valid credential with access permission to your Application Insights resource.
If you're using Fiddler, you might see the response header `HTTP/1.1 401 Unauthorized - please provide the valid authorization token`. #### CredentialUnavailableException
-If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid client ID in your User-Assigned Managed Identity configuration.
+If you see the exception, `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established` in the log file, it means the agent failed to acquire the access token. The likely cause is an invalid client ID in your User-Assigned Managed Identity configuration.
#### Failed to send telemetry
-If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This warning might be because the provided credentials don't grant access to ingest the telemetry into the component
-
-If you're using Fiddler, you might see the response header `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
-
-The root cause might be one of the following reasons:
--- You've created the resource with a system-assigned managed identity or associated a user-assigned identity with it. However, you might have forgotten to add the Monitoring Metrics Publisher role to the resource (if using SAMI) or the user-assigned identity (if using UAMI).-- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (VM or app service) or user-assigned identity with Monitoring Metrics Publisher roles in your Application Insights resource.-
-#### Invalid Tenant ID
+If you see the message, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials` in the log, it means the agent couldn't send telemetry. The likely reason is that the credentials used don't allow telemetry ingestion.
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid or the wrong `tenantId` in your client secret configuration.
+Using Fiddler, you might notice the response `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
-#### Invalid client secret
+The issue could be due to:
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid client secret in your client secret configuration.
+- Creating the resource with a system-assigned managed identity or associating a user-assigned identity without adding the Monitoring Metrics Publisher role to it.
+- Using the correct credentials for access tokens but linking them to the wrong Application Insights resource. Ensure your resource (virtual machine or app service) or user-assigned identity has the Monitoring Metrics Publisher role on your Application Insights resource, as in the sketch after this list.
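
A sketch of granting that role, assuming the Az PowerShell module is installed, you're signed in with `Connect-AzAccount`, and you know the identity's object ID (all placeholders are hypothetical):

```powershell
# A sketch: grant Monitoring Metrics Publisher on the Application Insights resource.
# <principal-object-id> is the object ID of the managed identity or service principal sending telemetry.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/components/<app-insights-name>"
New-AzRoleAssignment -ObjectId "<principal-object-id>" -RoleDefinitionName "Monitoring Metrics Publisher" -Scope $scope
```
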
#### Invalid Client ID
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid or the wrong client ID in your client secret configuration
+If the exception `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory` appears in the log, it means the agent failed to get the access token. This exception likely happens because the client ID in your client secret configuration is invalid or incorrect.
- If the administrator hasn't installed the application or no user in the tenant has consented to it, this scenario occurs. You may have sent your authentication request to the wrong tenant.
+This issue occurs if the administrator doesn't install the application or no tenant user consents to it. It also happens if you send your authentication request to the wrong tenant.
### [Python](#tab/python)
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights description: Application performance monitoring for Azure virtual machines and virtual machine scale sets. Previously updated : 03/22/2023 Last updated : 04/05/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
We recommend the [Application Insights Java 3.0 agent](./opentelemetry-enable.md
### [Node.js](#tab/nodejs)
-To instrument your Node.js application, use the [SDK](./nodejs.md).
+To instrument your Node.js application, use the [OpenTelemetry Distro](./opentelemetry-enable.md).
### [Python](#tab/python)
-To monitor Python apps, use the [SDK](/previous-versions/azure/azure-monitor/app/opencensus-python).
+To monitor Python apps, use the [OpenTelemetry Distro](./opentelemetry-enable.md).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Title: Monitor Azure app services performance ASP.NET | Microsoft Docs description: Learn about application performance monitoring for Azure app services by using ASP.NET. Chart load and response time and dependency information, and set alerts on performance. Previously updated : 03/22/2023 Last updated : 04/05/2024 ms.devlang: javascript
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You can also set the sampling percentage by using the environment variable `APPL
> [!NOTE] > For the sampling percentage, choose a percentage that's close to 100/N, where N is an integer. Currently, sampling doesn't support other values.
-## Sampling overrides (preview)
-
-This feature is in preview, starting from 3.0.3.
+## Sampling overrides
Sampling overrides allow you to override the [default sampling percentage](#sampling). For example, you can:
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Or using [inherited attributes](./java-standalone-config.md#inherited-attribute-
2.x SDK TelemetryProcessors don't run when using the 3.x agent. Many of the use cases that previously required writing a `TelemetryProcessor` can be solved in Application Insights Java 3.x
-by configuring [sampling overrides](./java-standalone-config.md#sampling-overrides-preview).
+by configuring [sampling overrides](./java-standalone-config.md#sampling-overrides).
## Multiple applications in a single JVM
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
You might use the following ways to filter out telemetry before it leaves your a
### [Java](#tab/java)
-See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md).
+See [sampling overrides](java-standalone-config.md#sampling-overrides) and [telemetry processors](java-standalone-telemetry-processors.md).
### [Node.js](#tab/nodejs)
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
- Title: Monitoring Azure monitor data reference
-description: Important reference material needed when you monitor parts of Azure Monitor
----- Previously updated : 04/03/2022---
-# Monitoring Azure Monitor data reference
-
-> [!NOTE]
-> This article might seem confusing because it lists the parts of the Azure Monitor service that Azure Monitor itself monitors.
-
-See [Monitoring Azure Monitor](monitor-azure-monitor.md) for an explanation of how Azure Monitor monitors itself.
-
-## Metrics
-
-This section lists the platform metrics that are collected automatically for Azure Monitor components into Azure Monitor.
-
-|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| [Autoscale behaviors for VMs and AppService](./autoscale/autoscale-overview.md) | [microsoft.insights/autoscalesettings](/azure/azure-monitor/platform/metrics-supported#microsoftinsightsautoscalesettings) |
-
-While technically not about Azure Monitor operations, the following metrics are collected into Azure Monitor namespaces.
-
-|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Log Analytics agent gathered data for the [Metric alerts on logs](./alerts/alerts-metric-logs.md#metrics-and-dimensions-supported-for-logs) feature | [Microsoft.OperationalInsights/workspaces](/azure/azure-monitor/platform/metrics-supported#microsoftoperationalinsightsworkspaces)
-| [Application Insights availability tests](./app/availability-overview.md) | [Microsoft.Insights/Components](./essentials/metrics-supported.md#microsoftinsightscomponents)
-
-See a complete list of [platform metrics for other resources types](/azure/azure-monitor/platform/metrics-supported).
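
To pull one of these platform metrics programmatically, a sketch along the following lines can work; the metric name shown (`ObservedMetricValue`) and all placeholders are assumptions to replace with values listed for `microsoft.insights/autoscalesettings` in the supported-metrics reference:

```powershell
# A sketch, assuming the Az PowerShell module; lists the last hour of an autoscale metric at one-minute grain.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/autoscalesettings/<autoscale-setting-name>"
Get-AzMetric -ResourceId $resourceId -MetricName "ObservedMetricValue" -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -TimeGrain 00:01:00
```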
-
-## Metric Dimensions
-
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
-
-The following dimensions are relevant for the following areas of Azure Monitor.
-
-### Autoscale
-
-| Dimension Name | Description |
-| - | -- |
-|MetricTriggerRule | The autoscale rule that triggered the scale action |
-|MetricTriggerSource | The metric value that triggered the scale action |
-|ScaleDirection | The direction of the scale action (up or down) |
-
-## Resource logs
-
-This section lists all the Azure Monitor resource log category types collected.
-
-|Resource Log Type | Resource Provider / Type Namespace<br/> and link |
-|-|--|
-| [Autoscale for VMs and AppService](./autoscale/autoscale-overview.md) | [Microsoft.insights/autoscalesettings](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings)|
-| [Application Insights availability tests](./app/availability-overview.md) | [Microsoft.insights/Components](./essentials/resource-logs-categories.md#microsoftinsightscomponents) |
-
-For additional reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
--
-## Azure Monitor Logs tables
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Monitor resource types and available for query by Log Analytics.
-
-|Resource Type | Notes |
-|--|-|
-| [Autoscale for VMs and AppService](./autoscale/autoscale-overview.md) | [Autoscale Tables](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-monitor-autoscale-settings) |
--
-## Activity log
-
-For a partial list of entries that the Azure Monitor service writes to the activity log, see [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md#monitor). There may be other entries not listed here.
-
-For more information on the schema of Activity Log entries, see [Activity Log schema](./essentials/activity-log-schema.md).
-
-## Schemas
-
-The following schemas are in use by Azure Monitor.
-
-### Action Groups
-
-The following schemas are relevant to action groups, which are part of the notification infrastructure for Azure Monitor. Following are example calls and responses for action groups.
-
-#### Create Action Group
-```json
-{
- "authorization": {
- "action": "microsoft.insights/actionGroups/write",
- "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc"
- },
- "caller": "test.cam@ieee.org",
- "channels": "Operation",
- "claims": {
- "aud": "https://management.core.windows.net/",
- "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
- "iat": "1627074914",
- "nbf": "1627074914",
- "exp": "1627078814",
- "http://schemas.microsoft.com/claims/authnclassreference": "1",
- "aio": "AUQAu/8TbbbbyZJhgackCVdLETN5UafFt95J8/bC1SP+tBFMusYZ3Z4PBQRZUZ4SmEkWlDevT4p7Wtr4e/R+uksbfixGGQumxw==",
- "altsecid": "1:live.com:00037FFE809E290F",
- "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
- "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
- "appidacr": "2",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
- "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
- "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
- "ipaddr": "73.254.xxx.xx",
- "name": "test cam",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
- "puid": "1003000086500F96",
- "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
- "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
- "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
- "uti": "KuRF5PX4qkyvxJQOXwZ2AA",
- "ver": "1.0",
- "wids": "62e90394-bbbb-4237-9190-012177145e10",
- "xms_tcdt": "1373393473"
- },
- "correlationId": "74d253d8-bd5a-4e8d-a38e-5a52b173b7bd",
- "description": "",
- "eventDataId": "0e9bc114-dcdb-4d2d-b1ea-d3f45a4d32ea",
- "eventName": {
- "value": "EndRequest",
- "localizedValue": "End request"
- },
- "category": {
- "value": "Administrative",
- "localizedValue": "Administrative"
- },
- "eventTimestamp": "2021-07-23T21:21:22.9871449Z",
- "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/0e9bc114-dcdb-4d2d-b1ea-d3f45a4d32ea/ticks/637626720829871449",
- "level": "Informational",
- "operationId": "74d253d8-bd5a-4e8d-a38e-5a52b173b7bd",
- "operationName": {
- "value": "microsoft.insights/actionGroups/write",
- "localizedValue": "Create or update action group"
- },
- "resourceGroupName": "testK-TEST",
- "resourceProviderName": {
- "value": "microsoft.insights",
- "localizedValue": "Microsoft Insights"
- },
- "resourceType": {
- "value": "microsoft.insights/actionGroups",
- "localizedValue": "microsoft.insights/actionGroups"
- },
- "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "status": {
- "value": "Succeeded",
- "localizedValue": "Succeeded"
- },
- "subStatus": {
- "value": "Created",
- "localizedValue": "Created (HTTP Status Code: 201)"
- },
- "submissionTimestamp": "2021-07-23T21:22:22.1634251Z",
- "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
- "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "properties": {
- "statusCode": "Created",
- "serviceRequestId": "33658bb5-fc62-4e40-92e8-8b1f16f649bb",
- "eventCategory": "Administrative",
- "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "message": "microsoft.insights/actionGroups/write",
- "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
- },
- "relatedEvents": []
-}
-```
-
-#### Delete Action Group
-```json
-{
- "authorization": {
- "action": "microsoft.insights/actionGroups/delete",
- "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc"
- },
- "caller": "test.cam@ieee.org",
- "channels": "Operation",
- "claims": {
- "aud": "https://management.core.windows.net/",
- "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
- "iat": "1627076795",
- "nbf": "1627076795",
- "exp": "1627080695",
- "http://schemas.microsoft.com/claims/authnclassreference": "1",
- "aio": "AUQAu/8TbbbbTkWb9O23RavxIzqfHvA2fJUU/OjdhtHPNAjv0W4pyNnoZ3ShUOEzDut700WhNXth6ZYpd7al4XyJPACEfmtr9g==",
- "altsecid": "1:live.com:00037FFE809E290F",
- "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
- "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
- "appidacr": "2",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
- "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
- "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
- "ipaddr": "73.254.xxx.xx",
- "name": "test cam",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
- "puid": "1003000086500F96",
- "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
- "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
- "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
- "uti": "E1BRdcfDzk64rg0eFx8vAA",
- "ver": "1.0",
- "wids": "62e90394-bbbb-4237-9190-012177145e10",
- "xms_tcdt": "1373393473"
- },
- "correlationId": "a0bd5f9f-d87f-4073-8650-83f03cf11733",
- "description": "",
- "eventDataId": "8c7c920e-6a50-47fe-b264-d762e60cc788",
- "eventName": {
- "value": "EndRequest",
- "localizedValue": "End request"
- },
- "category": {
- "value": "Administrative",
- "localizedValue": "Administrative"
- },
- "eventTimestamp": "2021-07-23T21:52:07.2708782Z",
- "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc/events/8c7c920e-6a50-47fe-b264-d762e60cc788/ticks/637626739272708782",
- "level": "Informational",
- "operationId": "f7cb83ba-36fa-47dd-8ec4-bcac40879241",
- "operationName": {
- "value": "microsoft.insights/actionGroups/delete",
- "localizedValue": "Delete action group"
- },
- "resourceGroupName": "testk-test",
- "resourceProviderName": {
- "value": "microsoft.insights",
- "localizedValue": "Microsoft Insights"
- },
- "resourceType": {
- "value": "microsoft.insights/actionGroups",
- "localizedValue": "microsoft.insights/actionGroups"
- },
- "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc",
- "status": {
- "value": "Succeeded",
- "localizedValue": "Succeeded"
- },
- "subStatus": {
- "value": "OK",
- "localizedValue": "OK (HTTP Status Code: 200)"
- },
- "submissionTimestamp": "2021-07-23T21:54:00.1811815Z",
- "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
- "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "properties": {
- "statusCode": "OK",
- "serviceRequestId": "88fe5ac8-ee1a-4b97-9d5b-8a3754e256ad",
- "eventCategory": "Administrative",
- "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc",
- "message": "microsoft.insights/actionGroups/delete",
- "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
- },
- "relatedEvents": []
-}
-```
-
-#### Unsubscribe using Email
-
-```json
-{
- "caller": "test.cam@ieee.org",
- "channels": "Operation",
- "claims": {
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "person@contoso.com",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn": "",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": ""
- },
- "correlationId": "8f936022-18d0-475f-9704-5151c75e81e4",
- "description": "User with email address:person@contoso.com has unsubscribed from action group:TestingLogginc, Action:testEmail_-EmailAction-",
- "eventDataId": "9b4b7b3f-79a2-4a6a-b1ed-30a1b8907765",
- "eventName": {
- "value": "",
- "localizedValue": ""
- },
- "category": {
- "value": "Administrative",
- "localizedValue": "Administrative"
- },
- "eventTimestamp": "2021-07-23T21:38:35.1687458Z",
- "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/9b4b7b3f-79a2-4a6a-b1ed-30a1b8907765/ticks/637626731151687458",
- "level": "Informational",
- "operationId": "",
- "operationName": {
- "value": "microsoft.insights/actiongroups/write",
- "localizedValue": "Create or update action group"
- },
- "resourceGroupName": "testK-TEST",
- "resourceProviderName": {
- "value": "microsoft.insights",
- "localizedValue": "Microsoft Insights"
- },
- "resourceType": {
- "value": "microsoft.insights/actiongroups",
- "localizedValue": "microsoft.insights/actiongroups"
- },
- "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "status": {
- "value": "Succeeded",
- "localizedValue": "Succeeded"
- },
- "subStatus": {
- "value": "Updated",
- "localizedValue": "Updated"
- },
- "submissionTimestamp": "2021-07-23T21:38:35.1687458Z",
- "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
- "tenantId": "",
- "properties": {},
- "relatedEvents": []
-}
-```
-
-#### Unsubscribe using SMS
-```json
-{
- "caller": "",
- "channels": "Operation",
- "claims": {
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "4252137109",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn": "",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": ""
- },
- "correlationId": "e039f06d-c0d1-47ac-b594-89239101c4d0",
- "description": "User with phone number:4255557109 has unsubscribed from action group:TestingLogginc, Action:testPhone_-SMSAction-",
- "eventDataId": "789d0b03-2a2f-40cf-b223-d228abb5d2ed",
- "eventName": {
- "value": "",
- "localizedValue": ""
- },
- "category": {
- "value": "Administrative",
- "localizedValue": "Administrative"
- },
- "eventTimestamp": "2021-07-23T21:31:47.1537759Z",
- "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/789d0b03-2a2f-40cf-b223-d228abb5d2ed/ticks/637626727071537759",
- "level": "Informational",
- "operationId": "",
- "operationName": {
- "value": "microsoft.insights/actiongroups/write",
- "localizedValue": "Create or update action group"
- },
- "resourceGroupName": "testK-TEST",
- "resourceProviderName": {
- "value": "microsoft.insights",
- "localizedValue": "Microsoft Insights"
- },
- "resourceType": {
- "value": "microsoft.insights/actiongroups",
- "localizedValue": "microsoft.insights/actiongroups"
- },
- "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "status": {
- "value": "Succeeded",
- "localizedValue": "Succeeded"
- },
- "subStatus": {
- "value": "Updated",
- "localizedValue": "Updated"
- },
- "submissionTimestamp": "2021-07-23T21:31:47.1537759Z",
- "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
- "tenantId": "",
- "properties": {},
- "relatedEvents": []
-}
-```
-
-#### Update Action Group
-```json
-{
- "authorization": {
- "action": "microsoft.insights/actionGroups/write",
- "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc"
- },
- "caller": "test.cam@ieee.org",
- "channels": "Operation",
- "claims": {
- "aud": "https://management.core.windows.net/",
- "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
- "iat": "1627074914",
- "nbf": "1627074914",
- "exp": "1627078814",
- "http://schemas.microsoft.com/claims/authnclassreference": "1",
- "aio": "AUQAu/8TbbbbyZJhgackCVdLETN5UafFt95J8/bC1SP+tBFMusYZ3Z4PBQRZUZ4SmEkWlDevT4p7Wtr4e/R+uksbfixGGQumxw==",
- "altsecid": "1:live.com:00037FFE809E290F",
- "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
- "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
- "appidacr": "2",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
- "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
- "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
- "ipaddr": "73.254.xxx.xx",
- "name": "test cam",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
- "puid": "1003000086500F96",
- "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
- "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
- "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
- "uti": "KuRF5PX4qkyvxJQOXwZ2AA",
- "ver": "1.0",
- "wids": "62e90394-bbbb-4237-9190-012177145e10",
- "xms_tcdt": "1373393473"
- },
- "correlationId": "5a239734-3fbb-4ff7-b029-b0ebf22d3a19",
- "description": "",
- "eventDataId": "62c3ebd8-cfc9-435f-956f-86c45eecbeae",
- "eventName": {
- "value": "BeginRequest",
- "localizedValue": "Begin request"
- },
- "category": {
- "value": "Administrative",
- "localizedValue": "Administrative"
- },
- "eventTimestamp": "2021-07-23T21:24:34.9424246Z",
- "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/62c3ebd8-cfc9-435f-956f-86c45eecbeae/ticks/637626722749424246",
- "level": "Informational",
- "operationId": "5a239734-3fbb-4ff7-b029-b0ebf22d3a19",
- "operationName": {
- "value": "microsoft.insights/actionGroups/write",
- "localizedValue": "Create or update action group"
- },
- "resourceGroupName": "testK-TEST",
- "resourceProviderName": {
- "value": "microsoft.insights",
- "localizedValue": "Microsoft Insights"
- },
- "resourceType": {
- "value": "microsoft.insights/actionGroups",
- "localizedValue": "microsoft.insights/actionGroups"
- },
- "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "status": {
- "value": "Started",
- "localizedValue": "Started"
- },
- "subStatus": {
- "value": "",
- "localizedValue": ""
- },
- "submissionTimestamp": "2021-07-23T21:25:22.1522025Z",
- "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
- "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
- "properties": {
- "eventCategory": "Administrative",
- "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
- "message": "microsoft.insights/actionGroups/write",
- "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
- },
- "relatedEvents": []
-}
-```
-
-## See Also
-- See [Monitoring Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself.
-- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
The secret should be created in kube-system namespace and then the configmap/CRD
Below are the details about how to provide the TLS config settings through a configmap or CRD. -- To provide the TLS config setting in a configmap, please create the self-signed certificate and key inside /etc/prometheus/certs directory inside your mtls enabled app.
+- To provide the TLS config setting in a configmap, create the self-signed certificate and key inside your mTLS-enabled app.
An example tlsConfig inside the config map should look like this: ```yaml
tls_config:
insecure_skip_verify: false ``` -- To provide the TLS config setting in a CRD, please create the self-signed certificate and key inside /etc/prometheus/certs directory inside your mtls enabled app.
+- To provide the TLS config setting in a CRD, create the self-signed certificate and key inside your mTLS-enabled app.
An example tlsConfig inside a Podmonitor should look like this: ```yaml
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
This article describes the different ways that Azure Monitor charges for usage a
[!INCLUDE [azure-monitor-cost-optimization](../../includes/azure-monitor-cost-optimization.md)] ## Pricing model
-Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use. Features of Azure Monitor that are enabled by default do not incur any charge, including collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
+
+Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. This includes collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed current pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). | Type | Description | |:|:|
-| Logs | Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application insights resources](app/convert-classic-resource.md). This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
-| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
-| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Logs | Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application Insights resources](app/convert-classic-resource.md). Log data ingestion is typically the largest component of Azure Monitor charges for most customers. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly depending on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. |
+| Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
-| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
-| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
+| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+| Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests are deprecated.|
A list of Azure Monitor billing meter names is available [here](cost-meters.md).
Sending data to Azure Monitor can incur data bandwidth charges. As described in
> Data sent to a different region using [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges ## View Azure Monitor usage and charges
-There are two primary tools to view, analyze and optimize your Azure Monitor costs. Each is described in detail in the following sections.
+There are two primary tools to view, analyze, and optimize your Azure Monitor costs. Each is described in detail in the following sections.
| Tool | Description | |:|:|
There are two primary tools to view, analyze and optimize your Azure Monitor cos
## Azure Cost Management + Billing
-To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. This tool includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
+To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. This tool includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
>[!NOTE] >You might need additional access to use Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
To limit the view to Azure Monitor charges, [create a filter](../cost-management
- Insight and Analytics - Application Insights
-Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
>[!NOTE]
Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also
### Automated mails and alerts Rather than manually analyzing your costs in the Azure portal, you can automate delivery of information using the following methods. -- **Daily cost analysis emails.** Once you've configured your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
- - **Budget alerts.** To be notified if there are significant increases in your spending, create a [budget alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces.
+- **Daily cost analysis emails.** After you configure your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
+- **Budget alerts.** To be notified if there are significant increases in your spending, create a [budget alert](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces.
### Export usage details To gain deeper understanding of your usage and costs, create exports using **Cost Analysis**. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
-These exports are in CSV format and will contain a list of daily usage (billed quantity and cost) by resource, [billing meter](cost-meters.md), and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage not possible in the **Cost Analytics** experiences in the portal.
+These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, [billing meter](cost-meters.md), and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage not possible in the **Cost Analytics** experiences in the portal.
-For example, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
+For example, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show:
1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention), 2. **Insight and Analytics** (used by some of the legacy pricing tiers), and
Add a filter on the **Instance ID** column for **contains workspace** or **conta
## View data allocation benefits
-There are several approaches to view the benefits a workspace receives from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+There are several approaches to view the benefits a workspace receives from offers that are part of other products. These offers are:
+
+1. [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and
+
+1. [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
### View benefits in a usage export
-Since a usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefits you are receiving. In the usage export, to see the benefits, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter on the Meter to either of the following two meters:
+Since a usage export has both the number of units of usage and their cost, you can use this export to see the benefits you're receiving. In the usage export, to see the benefits, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter on the Meter to either of the following two meters:
-- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace in either in Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these provide a 500 MB/server/day data allowance.-- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace is in the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these allowances provides a 500 MB/server/day data allowance.
+
+- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
### View benefits in Usage and estimated costs
-You can also see these data benefits in the Log Analytics Usage and estimated costs page. If the workspace is receiving these benefits, there will be a sentence below the cost estimate table that gives the data volume of the benefits used over the last 31 days.
+You can also see these data benefits in the Log Analytics Usage and estimated costs page. If the workspace is receiving these benefits, there's a sentence below the cost estimate table that gives the data volume of the benefits used over the last 31 days.
:::image type="content" source="media/cost-usage/log-analytics-workspace-benefit.png" lightbox="media/cost-usage/log-analytics-workspace-benefit.png" alt-text="Screenshot of monthly usage with benefits from Defender and Sentinel offers."::: ### Query benefits from the Operation table
-The [Operation](/azure/azure-monitor/reference/tables/operation) table contains daily events which given the amount of benefit used from the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). The `Detail` column for these events are all of the format `Benefit amount used 1.234 GB`, and the type of benefit is in the `OperationKey` column. Here is a query that charts the benefits used in the last 31-days:
+The [Operation](/azure/azure-monitor/reference/tables/operation) table contains daily events that give the amount of benefit used from the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). The `Detail` column for these events is in the format `Benefit amount used 1.234 GB`, and the type of benefit is in the `OperationKey` column. Here's a query that charts the benefits used in the last 31 days:
```kusto Operation
Operation
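// The article's full query is truncated in this digest. A hypothetical continuation, built only
// from the columns described above (Detail in the form "Benefit amount used 1.234 GB" and the
// benefit type in OperationKey), might look like this:
| where TimeGenerated >= ago(31d)
| where Detail startswith "Benefit amount used"
| parse Detail with "Benefit amount used " BenefitGBText " GB"
| extend BenefitUsedGB = todouble(BenefitGBText)
| summarize BenefitUsedGB = sum(BenefitUsedGB) by bin(TimeGenerated, 1d), OperationKey
| render columnchart
```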
> ## Usage and estimated costs
-You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+You can get more usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
### Log Analytics workspace To learn about your usage trends and optimize your costs using the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal. :::image type="content" source="media/cost-usage/usage-estimated-cost-dashboard-01.png" lightbox="media/cost-usage/usage-estimated-cost-dashboard-01.png" alt-text="Screenshot of usage and estimated costs screen in Azure portal.":::
-This view includes the following:
+This view includes the following sections:
A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br> B. Estimated monthly charges using different commitment tiers.<br>
Customers who purchased Microsoft Operations Management Suite E1 and E2 are elig
To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
-Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration.
--
-Also, if you move a subscription to the new Azure monitoring pricing model in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this change in pricing tier requires careful consideration.
> [!TIP] > If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier. >
+## Azure Migrate data benefits
+
+Workspaces linked to [classic Azure Migrate](/azure/migrate/migrate-services-overview#azure-migrate-versions) receive free data benefits for the data tables related to Azure Migrate (`ServiceMapProcess_CL`, `ServiceMapComputer_CL`, `VMBoundPort`, `VMConnection`, `VMComputer`, `VMProcess`, `InsightsMetrics`). This version of Azure Migrate was retired in February 2024.
+
+Starting from 1 July 2024, the data benefit for Azure Migrate in Log Analytics will no longer be available. We suggest moving to the [Azure Migrate agentless dependency analysis](/azure/migrate/how-to-create-group-machine-dependencies-agentless). If you continue with agent-based dependency analysis, standard [Azure Monitor charges](https://azure.microsoft.com/pricing/details/monitor/) will apply for the data ingestion that enables dependency visualization.
+ ## Next steps - See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
azure-monitor Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md
Code Optimizations analyzes the profiling data collected by the Application Insi
## Cost
-While Code Optimizations incurs no extra costs, you may encounter [indirect costs associated with Application Insights](../best-practices-cost.md#is-application-insights-free).
+Code Optimizations incurs no extra costs.
## Supported regions
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
In some scenarios, combining this data can result in cost savings. Typically, th
- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent) - [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus) - [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.
+- [MDCFileIntegrityMonitoringEvents](/azure/azure-monitor/reference/tables/mdcfileintegritymonitoringevents)
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data. To learn more on how Microsoft Sentinel customers can benefit, please see the [Microsoft Sentinel Pricing page](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Provide the following properties when creating new dedicated cluster:
- **ClusterName**: Must be unique for the resource group. - **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md). - **Location**-- **SkuCapacity**: You can set the commitment tier to 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, 50000 GB per day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
+- **SkuCapacity**: You can set the commitment tier to 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, or 50000 GB per day. The minimum commitment tier currently supported in the CLI is 500; use the REST API to configure lower commitment tiers, down to a minimum of 100. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters).
- **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): - System-assigned managed identity - Generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
Authorization: Bearer <token>
### Cluster Get
+- 404--Cluster not found; the cluster might have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is still in the deletion process.
### Cluster Delete
+- 409--Can't delete a cluster while it's in a provisioning state. Wait for the async operation to complete and try again.
### Workspace link
azure-monitor Monitor Azure Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor-reference.md
+
+ Title: Monitoring data reference for Azure Monitor
+description: This article contains important reference material you need when you monitor Azure Monitor.
Last updated : 03/31/2024+++++++
+# Azure Monitor monitoring data reference
++
+See [Monitor Azure Monitor](monitor-azure-monitor.md) for details on the data you can collect for Azure Monitor and how to use it.
+
+<!-- ## Metrics. Required section. -->
+
+<!-- Repeat the following section for each resource type/namespace in your service. For each ### section, replace the <ResourceType/namespace> placeholder, add the metrics-tableheader #include, and add the table #include.
+
+To add the table #include, find the table(s) for the resource type in the Metrics column at https://review.learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/metrics-index?branch=main#supported-metrics-and-log-categories-by-resource-type, which is autogenerated from underlying systems. -->
+
+### Supported metrics for Microsoft.Monitor/accounts
+The following table lists the metrics available for the Microsoft.Monitor/accounts resource type.
+
+### Supported metrics for microsoft.insights/autoscalesettings
+The following table lists the metrics available for the microsoft.insights/autoscalesettings resource type.
+
+### Supported metrics for microsoft.insights/components
+The following table lists the metrics available for the microsoft.insights/components resource type.
+
+### Supported metrics for Microsoft.Insights/datacollectionrules
+The following table lists the metrics available for the Microsoft.Insights/datacollectionrules resource type.
+
+### Supported metrics for Microsoft.OperationalInsights/workspaces
+
+Azure Monitor Logs / Log Analytics workspaces
++
+<!-- ## Metric dimensions. Required section. -->
++
+Microsoft.Monitor/accounts:
+
+- `Stamp color`
+
+microsoft.insights/autoscalesettings:
+
+- `MetricTriggerRule`
+- `MetricTriggerSource`
+- `ScaleDirection`
+
+microsoft.insights/components:
+
+- `availabilityResult/name`
+- `availabilityResult/location`
+- `availabilityResult/success`
+- `dependency/type`
+- `dependency/performanceBucket`
+- `dependency/success`
+- `dependency/target`
+- `dependency/resultCode`
+- `operation/synthetic`
+- `cloud/roleInstance`
+- `cloud/roleName`
+- `client/isServer`
+- `client/type`
+
+Microsoft.Insights/datacollectionrules:
+
+- `InputStreamId`
+- `ResponseCode`
+- `ErrorType`
++
+### Supported resource logs for Microsoft.Monitor/accounts
+
+### Supported resource logs for microsoft.insights/autoscalesettings
+
+### Supported resource logs for microsoft.insights/components
+
+### Supported resource logs for Microsoft.Insights/datacollectionrules
++
+### Application Insights
+microsoft.insights/components
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
+- [AppAvailabilityResults](/azure/azure-monitor/reference/tables/AppAvailabilityResults#columns)
+- [AppBrowserTimings](/azure/azure-monitor/reference/tables/AppBrowserTimings#columns)
+- [AppDependencies](/azure/azure-monitor/reference/tables/AppDependencies#columns)
+- [AppEvents](/azure/azure-monitor/reference/tables/AppEvents#columns)
+- [AppPageViews](/azure/azure-monitor/reference/tables/AppPageViews#columns)
+- [AppPerformanceCounters](/azure/azure-monitor/reference/tables/AppPerformanceCounters#columns)
+- [AppRequests](/azure/azure-monitor/reference/tables/AppRequests#columns)
+- [AppSystemEvents](/azure/azure-monitor/reference/tables/AppSystemEvents#columns)
+- [AppTraces](/azure/azure-monitor/reference/tables/AppTraces#columns)
+- [AppExceptions](/azure/azure-monitor/reference/tables/AppExceptions#columns)
+
+### Azure Monitor autoscale settings
+Microsoft.Insights/AutoscaleSettings
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
+- [AutoscaleEvaluationsLog](/azure/azure-monitor/reference/tables/AutoscaleEvaluationsLog#columns)
+- [AutoscaleScaleActionsLog](/azure/azure-monitor/reference/tables/AutoscaleScaleActionsLog#columns)
+
+### Azure Monitor Workspace
+Microsoft.Monitor/accounts
+
+- [AMWMetricsUsageDetails](/azure/azure-monitor/reference/tables/AMWMetricsUsageDetails#columns)
+
+### Data Collection Rules
+Microsoft.Insights/datacollectionrules
+
+- [DCRLogErrors](/azure/azure-monitor/reference/tables/DCRLogErrors#columns)
+
+### Workload Monitoring of Azure Monitor Insights
+Microsoft.Insights/WorkloadMonitoring
+
+- [InsightsMetrics](/azure/azure-monitor/reference/tables/InsightsMetrics#columns)
+
+- [Monitor resource provider operations](/azure/role-based-access-control/resource-provider-operations#monitor)
+
+## Related content
+
+- See [Monitor Azure Monitor](monitor-azure-monitor.md) for a description of monitoring Azure Monitor.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
Title: Monitoring Azure Monitor
-description: Learn about how Azure Monitor monitors itself
+ Title: Monitor Azure Monitor
+description: Start here to learn how to monitor Azure Monitor.
Last updated : 03/31/2024++ - - Previously updated : 04/07/2022-
-<!-- VERSION 2.2-->
+# Monitor Azure Monitor
-# Monitoring Azure Monitor
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+Azure Monitor has many separate larger components. Information on monitoring each of these components follows.
-This article describes the monitoring data generated by Azure Monitor. Azure Monitor uses [itself](./overview.md) to monitor certain parts of its own functionality. You can monitor:
+## Azure Monitor core
-- Autoscale operations-- Monitoring operations in the audit log
+**Autoscale** - Azure Monitor Autoscale has a diagnostics feature that provides insights into the performance of your autoscale settings. For more information, see [Azure Monitor Autoscale diagnostics](autoscale/autoscale-diagnostics.md) and [Troubleshooting using autoscale metrics](autoscale/autoscale-troubleshoot.md#autoscale-metrics).
- If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md).
+**Agent Monitoring** - You can monitor the health of your agents easily and seamlessly across Azure, on-premises, and other clouds using this interactive experience. For more information, see [Azure Monitor Agent Health](agents/azure-monitor-agent-health.md).
-For an overview showing where autoscale and the audit log fit into Azure Monitor, see [Introduction to Azure Monitor](overview.md).
+**Data Collection Rules (DCRs)** - Use [detailed metrics and logs](essentials/data-collection-monitor.md) to monitor the performance of your DCRs.
-## Monitoring overview page in Azure portal
+## Azure Monitor Logs and Log Analytics
-The **Overview** page in the Azure portal for Azure Monitor shows links and tutorials on how to use Azure Monitor in general. It doesn't mention any of the specific resources discussed later in this article.
+**[Log Analytics Workspace Insights](logs/log-analytics-workspace-insights-overview.md)** provides a dashboard that shows you the volume of data going through your workspace(s). You can calculate the cost of your workspace based on the data volume.
+
+**[Log Analytics workspace health](logs/log-analytics-workspace-health.md)** provides a set of queries that you can use to monitor the health of your workspace.
-## Monitoring data
+**Optimizing and troubleshooting log queries** - Sometimes Azure Monitor KQL Log queries can take more time to run than needed or never return at all. By monitoring the various aspects of the query, you can troubleshoot and optimize them. For more information, see [Audit queries in Azure Monitor Logs](logs/query-audit.md) and [Optimize log queries](logs/query-optimization.md).
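As a minimal sketch of that approach (assuming [query auditing](logs/query-audit.md) is enabled so the `LAQueryLogs` table is populated; the `ResponseDurationMs`, `StatsCPUTimeMs`, and `AADEmail` columns come from that article's schema), you can surface the slowest queries and who ran them:

```kusto
// Hypothetical example: users issuing the slowest queries over the last day.
LAQueryLogs
| where TimeGenerated > ago(1d)
| summarize QueryCount = count(),
            AvgDurationMs = avg(ResponseDurationMs),
            AvgCpuMs = avg(StatsCPUTimeMs)
    by AADEmail
| top 10 by AvgDurationMs desc
```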
-Azure Monitor collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](./essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
+**Log Ingestion pipeline latency** - Azure Monitor provides a highly scalable log ingestion pipeline that can ingest logs from any source. You can monitor the latency of this pipeline using Kusto queries. For more information, see [Log data ingestion time in Azure Monitor](logs/data-ingestion-time.md#check-ingestion-time).
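For example, a minimal sketch of the approach in the linked article (assuming the `Heartbeat` table is being collected; `ingestion_time()` returns the time each record arrived in the workspace):

```kusto
// Estimate end-to-end ingestion latency by comparing ingestion time with the record timestamp.
Heartbeat
| where TimeGenerated > ago(8h)
| extend E2EIngestionLatency = ingestion_time() - TimeGenerated
| summarize percentiles(E2EIngestionLatency, 50, 95)
```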
-See [Monitoring *Azure Monitor* data reference](azure-monitor-monitoring-reference.md) for detailed information on the metrics and logs metrics created by Azure Monitor.
+**Log Analytics usage** - You can monitor the data ingestion for your Log Analytics workspace. For more information, see [Analyze usage in Log Analytics](logs/analyze-usage.md).
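A minimal sketch of such a usage query (the `Usage` table reports `Quantity` in MB, so dividing by 1,000 approximates GB):

```kusto
// Billable data ingested per table (data type) over the last 30 days.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| sort by IngestedGB desc
```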
-## Collection and routing
+## All resources
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+**Health of any Azure resource** - Azure Monitor resources are tied into the resource health feature, which provides insights into the health of any Azure resource. For more information, see [Resource health](/azure/service-health/resource-health-overview/).
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Monitor* are listed in [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#resource-logs).
-The metrics and logs you can collect are discussed in the following sections.
+For more information about the resource types for Azure Monitor, see [Azure Monitor monitoring data reference](monitor-azure-monitor-reference.md).
-## Analyzing metrics
-You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](./essentials/analyze-metrics.md) for details on using this tool.
-For a list of the platform metrics collected for Azure Monitor into itself, see [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#metrics).
+For a list of available metrics for Azure Monitor, see [Azure Monitor monitoring data reference](monitor-azure-monitor-reference.md#metrics).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](./essentials/metrics-supported.md).
-<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Monitor, see [Azure Monitor monitoring data reference](monitor-azure-monitor-reference.md#resource-logs).
-## Analyzing logs
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](./essentials/resource-logs-schema.md) The schemas for autoscale resource logs are found in the [Azure Monitor Data Reference](azure-monitor-monitoring-reference.md#resource-logs)
-The [Activity log](./essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-For a list of the types of resource logs collected for Azure Monitor, see [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md#resource-logs).
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md#azure-monitor-logs-tables)
+Refer to the links earlier in this article for specific Kusto queries for each Azure Monitor component.
-### Sample Kusto queries
-These are now listed in the [Log Analytics user interface](./logs/queries.md).
+## Related content
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-types.md#log-alerts), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-
-For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](./autoscale/autoscale-troubleshoot.md).
-
-## Next steps
--- See [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md) for a reference of the metrics, logs, and other important values created by Azure Monitor to monitor itself.-- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Azure Monitor monitoring data reference](monitor-azure-monitor-reference.md) for a reference of the metrics, logs, and other important values created for Azure Monitor.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
using Microsoft.ApplicationInsights.SnapshotCollector;
builder.Services.Configure<SnapshotCollectorConfiguration>(builder.Configuration.GetSection("SnapshotCollector")); ```
-Next, add a `SnapshotCollector` section to *appsettings.json* where you can override the defaults. The following example shows a configuration equivalent to the default configuration:
+Next, add a `SnapshotCollector` section to _appsettings.json_ where you can override the defaults. The following example shows a configuration equivalent to the default configuration:
```json {
Next, add a `SnapshotCollector` section to *appsettings.json* where you can over
} ```
-If you need to customize the Snapshot Collector's behavior manually, without using *appsettings.json*, use the overload of `AddSnapshotCollector` that takes a delegate. For example:
+If you need to customize the Snapshot Collector's behavior manually, without using _appsettings.json_, use the overload of `AddSnapshotCollector` that takes a delegate. For example:
```csharp builder.Services.AddSnapshotCollector(config => config.IsEnabledInDeveloperMode = true); ```
builder.Services.AddSnapshotCollector(config => config.IsEnabledInDeveloperMode
Snapshots are collected only on exceptions that are reported to Application Insights. For ASP.NET and ASP.NET Core applications, the Application Insights SDK automatically reports unhandled exceptions that escape a controller method or endpoint route handler. For other applications, you might need to modify your code to report them. The exception handling code depends on the structure of your application. Here's an example: ```csharp
-TelemetryClient _telemetryClient = new TelemetryClient();
-void ExampleRequest()
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.DataContracts;
+using Microsoft.ApplicationInsights.Extensibility;
+
+internal class ExampleService
{
+ private readonly TelemetryClient _telemetryClient;
+
+ public ExampleService(TelemetryClient telemetryClient)
+ {
+ // Obtain the TelemetryClient via dependency injection.
+ _telemetryClient = telemetryClient;
+ }
+
+ public void HandleExampleRequest()
+ {
+ using IOperationHolder<RequestTelemetry> operation =
+ _telemetryClient.StartOperation<RequestTelemetry>("Example");
try {
- // TODO: Handle the request.
+ // TODO: Handle the request.
+ operation.Telemetry.Success = true;
} catch (Exception ex) {
- // Report the exception to Application Insights.
- _telemetryClient.TrackException(ex);
- // TODO: Rethrow the exception if desired.
+ // Report the exception to Application Insights.
+ operation.Telemetry.Success = false;
+ _telemetryClient.TrackException(ex);
+ // TODO: Rethrow the exception if desired.
}
+ }
} ```
+The following example uses `ILogger` instead of `TelemetryClient`. This example assumes you're using the [Application Insights Logger Provider](../app/ilogger.md#console-application). As the example shows, when handling an exception, be sure to pass the exception as the first parameter to `LogError`.
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+internal class LoggerExample
+{
+ private readonly ILogger _logger;
+
+ public LoggerExample(ILogger<LoggerExample> logger)
+ {
+ _logger = logger;
+ }
+
+ public void HandleExampleRequest()
+ {
+ using IDisposable scope = _logger.BeginScope("Example");
+ try
+ {
+ // TODO: Handle the request
+ }
+ catch (Exception ex)
+ {
+ // Use the LogError overload with an Exception as the first parameter.
+ _logger.LogError(ex, "An error occurred.");
+ }
+ }
+}
+```
+
+> [!NOTE]
+> By default, the Application Insights Logger (`ApplicationInsightsLoggerProvider`) forwards exceptions to the Snapshot Debugger via `TelemetryClient.TrackException`. This behavior is controlled via the `TrackExceptionsAsExceptionTelemetry` property on the `ApplicationInsightsLoggerOptions` class. If you set `TrackExceptionsAsExceptionTelemetry` to `false` when configuring the Application Insights Logger, then the preceding example will not trigger the Snapshot Debugger. In this case, modify your code to call `TrackException` manually.
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Next steps - Generate traffic to your application that can trigger an exception. Then wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. - See [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
+- [Troubleshoot](snapshot-debugger-troubleshoot.md) Snapshot Debugger problems.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation"
description: "What's new in Azure Monitor documentation" Previously updated : 02/08/2024 Last updated : 04/04/2024
This article lists significant changes to Azure Monitor documentation.
## [2024](#tab/2024)
+## March 2024
+
+|Subservice | Article | Description |
+||||
+|Alerts|[Improve the reliability of your application by using Azure Advisor](../../articles/advisor/advisor-high-availability-recommendations.md)|We've updated the alerts troubleshooting articles to remove out-of-date content and include common support issues.|
+|Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications](app/opentelemetry-enable.md)|OpenTelemetry sample applications are now provided in a centralized location.|
+|Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|Classic Application Insights resources have been retired. For more information, see this article for migration information and frequently asked questions.|
+|Application-Insights|[Sampling overrides - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)|The sampling overrides feature has reached general availability (GA), starting from 3.5.0.|
+|Containers|[Configure data collection and cost optimization in Container insights using data collection rule](containers/container-insights-data-collection-dcr.md)|Updated to include new Logs and Events cost preset.|
+|Containers|[Enable private link with Container insights](containers/container-insights-private-link.md)|Updated with ARM templates.|
+|Essentials|[Data collection rules in Azure Monitor](essentials/data-collection-rule-overview.md)|Rewritten to consolidate previous data collection article.|
+|Essentials|[Workspace transformation data collection rule (DCR) in Azure Monitor](essentials/data-collection-transformations-workspace.md)|Content moved to a new article dedicated to workspace transformation DCR.|
+|Essentials|[Data collection transformations in Azure Monitor](essentials/data-collection-transformations.md)|Rewritten to remove redundancy and make the article more consistent with related articles.|
+|Essentials|[Create and edit data collection rules (DCRs) in Azure Monitor](essentials/data-collection-rule-create-edit.md)|Updated API version in REST API calls.|
+|Essentials|[Tutorial: Edit a data collection rule (DCR)](essentials/data-collection-rule-edit.md)|Updated API version in REST API calls.|
+|Essentials|[Monitor and troubleshoot DCR data collection in Azure Monitor](essentials/data-collection-monitor.md)|New article documenting new DCR monitoring feature.|
+|Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|Added new metrics for monitoring data export from a Log Analytics workspace.|
+|Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Azure Databricks logs tables now support the basic logs data plan.|
+ ## February 2024 |Subservice | Article | Description |
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
# What is Azure NetApp Files?
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can also select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and enterprise applications that rely on on-premises.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can also select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and rely on on-premises.
Key attributes of Azure NetApp Files are:
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
> This article addresses performance considerations for *regular volumes* only. > For *large volumes*, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md#requirements-and-considerations).
-The combination of the quota assigned to the volume and the selected service level determines the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS . For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
+The combination of the quota assigned to the volume and the selected service level determines the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS. For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
## Quota and throughput
Typical storage performance considerations contribute to the total performance d
Metrics are reported as aggregates of multiple data points collected during a five-minute interval. For more information about metrics aggregation, see [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md).
-The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB will provision a throughput limit that is high enough to achieve this level of performance.
+The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB provisions a throughput limit high enough to achieve this performance level.
-For automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing more data. However, the added quota doesn't result in a further increase in actual throughput.
+For automatic QoS volumes, if you're considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing more data. However, the added quota doesn't result in a further increase in actual throughput.
The same empirical throughput ceiling applies to volumes with manual QoS. The maximum throughput that can be assigned to a volume is 4,500 MiB/s.
## Automatic QoS volume quota and throughput
-This section describes quota management and throughput for volumes with the automatic QoS type.
+Learn about quota management and throughput for volumes with the automatic QoS type.
### Overprovisioning the volume quota
-If a workload's performance is throughput-limit bound, it is possible to overprovision the automatic QoS volume quota to set a higher throughput level and achieve higher performance.
+If a workload's performance is throughput-limit bound, it's possible to overprovision the automatic QoS volume quota to set a higher throughput level and achieve higher performance.
-For example, if an automatic QoS volume in the Premium storage tier has only 500 GiB of data but requires 128 MiB/s of throughput, you can set the quota to 2 TiB so that the throughput level is set accordingly (64 MiB/s per TB * 2 TiB = 128 MiB/s).
+For example, if an automatic QoS volume in the Premium storage tier has only 500 GiB of data but requires 128 MiB/s of throughput, you can set the quota to 2 TiB so the throughput level is set accordingly (64 MiB/s per TiB * 2 TiB = 128 MiB/s).
-If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In this example, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
+If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In this example, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
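To restate the arithmetic in these examples as a general relationship (this formula is implied by the examples above rather than quoted from the article):

$$
\text{Throughput limit (MiB/s)} = \text{volume quota (TiB)} \times \text{service level throughput (MiB/s per TiB)}
$$

With the numbers above, the Premium example is 2 TiB × 64 MiB/s per TiB = 128 MiB/s, and the Ultra example is 1 TiB × 128 MiB/s per TiB = 128 MiB/s.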
### Dynamically increasing or decreasing volume quota
If your performance requirements are temporary in nature, or if you have increas
If you use manual QoS volumes, you don't have to overprovision the volume quota to achieve a higher throughput because the throughput can be assigned to each volume independently. However, you still need to ensure that the capacity pool is pre-provisioned with sufficient throughput for your performance needs. The throughput of a capacity pool is provisioned according to its size and service level. See [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) for more details.
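As a rough illustration of the manual QoS budget (hypothetical helper names, not part of any SDK), the sketch below computes the throughput a capacity pool provides from its size and service level, and checks that per-volume throughput assignments stay within that budget:

```typescript
type ServiceLevel = "Standard" | "Premium" | "Ultra";

// MiB/s of throughput provisioned per TiB of capacity pool size, by service level.
const poolRatePerTiB: Record<ServiceLevel, number> = { Standard: 16, Premium: 64, Ultra: 128 };

// Total throughput provisioned for a manual QoS capacity pool (MiB/s).
function poolThroughput(poolSizeTiB: number, level: ServiceLevel): number {
  return poolSizeTiB * poolRatePerTiB[level];
}

// Verify that per-volume throughput assignments fit inside the pool's budget.
function assignmentsFit(poolSizeTiB: number, level: ServiceLevel, volumeThroughputsMiBs: number[]): boolean {
  const assigned = volumeThroughputsMiBs.reduce((sum, t) => sum + t, 0);
  return assigned <= poolThroughput(poolSizeTiB, level);
}

// Example: a 4 TiB Premium pool provides 256 MiB/s to distribute across its volumes.
console.log(poolThroughput(4, "Premium"));            // 256
console.log(assignmentsFit(4, "Premium", [128, 64])); // true
```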
+## Monitoring volumes for performance
+
+Azure NetApp Files volumes can be monitored using available [Performance metrics](azure-netapp-files-metrics.md#performance-metrics-for-volumes).
+
+When volume throughput reaches its maximum (as determined by the QoS setting), the volume response times (latency) increase. This effect can be incorrectly perceived as a performance issue caused by the storage. Increasing the volume QoS setting (manual QoS) or increasing the volume size (auto QoS) increases the allowable volume throughput.
+
+To check if the maximum throughput limit has been reached, monitor the metric [Throughput limit reached](azure-netapp-files-metrics.md#volumes). For more recommendations, see [Performance FAQs for Azure NetApp Files](faq-performance.md#what-should-i-do-to-optimize-or-tune-azure-netapp-files-performance).
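If you prefer to automate that check, the following sketch queries the volume's metrics with the `@azure/monitor-query` and `@azure/identity` packages. The resource ID is a placeholder, and the metric name `ThroughputLimitReached` is an assumption; confirm the exact metric ID in the metrics reference linked above.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { MetricsQueryClient } from "@azure/monitor-query";

// Placeholder: resource ID of the Azure NetApp Files volume to inspect.
const volumeResourceId =
  "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>";

async function checkThroughputLimit(): Promise<void> {
  const client = new MetricsQueryClient(new DefaultAzureCredential());

  // "ThroughputLimitReached" is an assumed metric ID; verify it against the metrics reference.
  const result = await client.queryResource(volumeResourceId, ["ThroughputLimitReached"], {
    timespan: { duration: "PT1H" },
    aggregations: ["Maximum"],
  });

  for (const metric of result.metrics) {
    for (const series of metric.timeseries) {
      for (const point of series.data ?? []) {
        console.log(`${point.timeStamp.toISOString()} max=${point.maximum}`);
      }
    }
  }
}

checkThroughputLimit().catch(console.error);
```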
## Next steps
azure-netapp-files Faq Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md
You can take the following actions per the performance requirements:
There is no need to set accelerated networking for the NICs in the dedicated subnet of Azure NetApp Files. [Accelerated networking](../virtual-network/virtual-machine-network-throughput.md) is a capability that only applies to Azure virtual machines. Azure NetApp Files NICs are optimized by design.
+## How do I monitor Azure NetApp Files volume performance?
+
+Azure NetApp Files volume performance can be monitored through [available metrics](azure-netapp-files-metrics.md).
+ ## How do I convert throughput-based service levels of Azure NetApp Files to IOPS? You can convert MB/s to IOPS by using the following formula:
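As a rough sketch of the MB/s-to-IOPS relationship (the formula form and the 8 KB I/O size below are assumptions; use your workload's actual per-operation I/O size):

```typescript
// Rough conversion sketch: IOPS ≈ (throughput in MB/s * 1000) / I/O size in KB.
// The 8 KB I/O size is an assumption; substitute your workload's actual I/O size.
function throughputToIops(throughputMBps: number, ioSizeKB: number): number {
  return (throughputMBps * 1000) / ioSizeKB;
}

console.log(throughputToIops(128, 8)); // 16000 IOPS at an 8 KB I/O size
```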
No, Azure NetApp Files does not support SMB Direct.
## Is NIC Teaming supported in Azure?
-NIC Teaming is not supported in Azure. Although multiple network interfaces are supported on Azure virtual machines, they represent a logical rather than a physical construct. As such, they provide no fault tolerance. Also, the bandwidth available to an Azure virtual machine is calculated for the machine itself and not any individual network interface.
+NIC Teaming isn't supported in Azure. Although multiple network interfaces are supported on Azure virtual machines, they represent a logical rather than a physical construct. As such, they provide no fault tolerance. Also, the bandwidth available to an Azure virtual machine is calculated for the machine itself and not any individual network interface.
## Are jumbo frames supported?
-Jumbo frames are not supported with Azure virtual machines.
+Jumbo frames aren't supported with Azure virtual machines.
## Next steps
azure-portal Microsoft Entra Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/microsoft-entra-id.md
Title: Use Microsoft Entra ID with the Azure mobile app description: Use the Azure mobile app to manage users and groups with Microsoft Entra ID. Previously updated : 03/08/2024 Last updated : 04/04/2024
The Azure mobile app provides access to Microsoft Entra ID. You can perform task
To access Microsoft Entra ID, open the Azure mobile app and sign in with your Azure account. From **Home**, scroll down to select the **Microsoft Entra ID** card. > [!NOTE]
-> Your account must have the appropriate permissions in order to perform these tasks. For example, to invite a user to your tenant, you must have a role that includes this permission, such as [Guest Inviter](/entra/identity/role-based-access-control/permissions-reference) role or [User Administrator](/entra/identity/role-based-access-control/permissions-reference).
+> Your account must have the appropriate permissions in order to perform these tasks. For example, to invite a user to your tenant, you must have a role that includes this permission, such as [Guest Inviter](/entra/identity/role-based-access-control/permissions-reference) or [User Administrator](/entra/identity/role-based-access-control/permissions-reference).
## Invite a user to the tenant
To add one or more users to a group from the Azure mobile app:
1. Search or scroll to find the desired group, then tap to select it. 1. On the **Members** card, select **See All**. The current list of members is displayed. 1. Select the **+** icon in the top right corner.
-1. Search or scroll to find users you want to add to the group, then select the user(s) by tapping the circle next to their name.
-1. Select **Add** in the top right corner to add the selected users(s) to the group.
+1. Search or scroll to find users you want to add to the group, then select one or more users by tapping the circle next to their name.
+1. Select **Add** in the top right corner to add the selected users to the group.
## Add group memberships for a specified user
You can also add a single user to one or more groups in the **Users** section of
1. In **Microsoft Entra ID**, select **Users**, then search or scroll to find and select the desired user. 1. On the **Groups** card, select **See All** to display all current group memberships for that user. 1. Select the **+** icon in the top right corner.
-1. Search or scroll to find groups to which this user should be added, then select the group(s) by tapping the circle next to the group name.
-1. Select **Add** in the top right corner to add the user to the selected group(s).
+1. Search or scroll to find groups to which this user should be added, then select one or more groups by tapping the circle next to the group name.
+1. Select **Add** in the top right corner to add the user to the selected groups.
## Manage authentication methods or reset password for a user
-To [manage authentication methods](/entra/identity/authentication/concept-authentication-methods-manage) or [reset a user's password](/entra/fundamentals/users-reset-password-azure-portal), you need to do the following steps:
+To [manage authentication methods](/entra/identity/authentication/concept-authentication-methods-manage) or [reset a user's password](/entra/fundamentals/users-reset-password-azure-portal):
1. In **Microsoft Entra ID**, select **Users**, then search or scroll to find and select the desired user. 1. On the **Authentication methods** card, select **Manage**.
-1. Select **Reset password** to assign a temporary password to the user, or **Authentication methods** to manage to Tap on the desired user, then tap on ΓÇ£Reset passwordΓÇ¥ or ΓÇ£Authentication methodsΓÇ¥ based on your permissions.
+1. Select **Reset password** to assign a temporary password to the user, or **Authentication methods** to manage authentication methods for self-service password reset.
> [!NOTE] > You won't see the **Authentication methods** card if you don't have the appropriate permissions to manage authentication methods and/or password changes for a user.
+## Investigate risky users and sign-ins
+
+[Microsoft Entra ID Protection](/entra/id-protection/overview-identity-protection) provides organizations with reporting they can use to [investigate identity risks in their environment](/entra/id-protection/howto-identity-protection-investigate-risk).
+
+If you have the [necessary permissions and license](/entra/id-protection/overview-identity-protection#required-roles), you'll see details in the **Risky users** and **Risky sign-ins** sections within **Microsoft Entra ID**. You can open these sections to view more information and perform some management tasks.
+
+### Manage risky users
+
+1. In **Microsoft Entra ID**, scroll down to the **Security** card and then select **Risky users**.
+1. Search or scroll to find and select a specific risky user.
+1. Review basic information for this user, a list of their risky sign-ins, and their risk history.
+1. To [take action on the user](/entra/id-protection/howto-identity-protection-investigate-risk), select the three dots near the top of the screen. You can:
+
+ * Reset the user's password
+ * Confirm user compromise
+ * Dismiss user risk
+ * Block the user from signing in (or unblock, if previously blocked)
+
+### Monitor risky sign-ins
+
+1. In **Microsoft Entra ID**, scroll down to the **Security** card and then select **Risky sign-ins**. It may take a minute or two for the list of all risky sign-ins to load.
+
+1. Search or scroll to find and select a specific risky sign-in.
+
+1. Review details about the risky sign-in.
+ ## Activate Privileged Identity Management (PIM) roles If you have been made eligible for an administrative role through Microsoft Entra Privileged Identity Management (PIM), you must activate the role assignment when you need to perform privileged actions. This activation can be done from within the Azure mobile app.
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Vaulted backup for Azure Files (preview) is available in West Central US, Southe
| File share type | Support |
| -- | -- |
-| Standard | Supported |
+| Standard (with large file shares enabled) | Supported |
| Large | Supported |
| Premium | Supported |
| File shares connected with Azure File Sync service | Supported |
backup Azure Kubernetes Service Cluster Backup Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-powershell.md
A Backup vault is a management entity in Azure that stores backup data for vario
Here, we're creating a Backup vault *TestBkpVault* in *West US* region under the resource group *testBkpVaultRG*. Use the `New-AzDataProtectionBackupVault` cmdlet to create a Backup vault. Learn more about [creating a Backup vault](create-manage-backup-vault.md#create-a-backup-vault).
->[!Note]
->Though the selected vault may have the *global-redundancy* setting, backup for AKS currently supports **Operational Tier** only. All backups are stored in your subscription in the same region as that of the AKS cluster, and they aren't copied to Backup vault storage.
+> [!NOTE]
+> Though the selected vault may have the *global-redundancy* setting, backup for AKS currently supports **Operational Tier** only. All backups are stored in your subscription in the same region as that of the AKS cluster, and they aren't copied to Backup vault storage.
1. To define the storage settings of the Backup vault, run the following cmdlet:
- >[!Note]
- >The vault is created with only *Local Redundancy* and *Operational Data store* support.
+ > [!NOTE]
+ > The vault is created with only *Local Redundancy* and *Operational Data store* support.
```azurepowershell $storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant -DataStoreType OperationalStore
Backup for AKS provides multiple backups per day. The backups are equally distri
If *once a day backup* is sufficient, then choose the *Daily backup frequency*. In the daily backup frequency, you can specify the *time of the day* when your backups should be taken.
->[!Important]
->The time of the day indicates the backup start time and not the time when the backup completes. The time required for completing the backup operation is dependent on various factors, including number and size of the persistent volumes and churn rate between consecutive backups.
+> [!IMPORTANT]
+> The time of the day indicates the backup start time and not the time when the backup completes. The time required for completing the backup operation is dependent on various factors, including number and size of the persistent volumes and churn rate between consecutive backups.
If you want to edit the hourly frequency or the retention period, use the `Edit-AzDataProtectionPolicyTriggerClientObject` and/or `Edit-AzDataProtectionPolicyRetentionRuleClientObject` cmdlets. Once the policy object has all the required values, start creating a new policy from the policy object using the `New-AzDataProtectionBackupPolicy` cmdlet.
Once the vault and policy creation are complete, you need to perform the followi
To create a new storage account and a blob container, see [these steps](../storage/blobs/blob-containers-powershell.md#create-a-container).
- >[!Note]
- >1. The storage account and the AKS cluster should be in the same region and subscription.
- >2. The blob container shouldn't contain any previously created file systems (except created by backup for AKS).
- >3. If your source or target AKS cluster is in a private virtual network, then you need to create Private Endpoint to connect storage account with the AKS cluster.
+ > [!NOTE]
+ > 1. The storage account and the AKS cluster should be in the same region and subscription.
+ > 2. The blob container shouldn't contain any previously created file systems (except created by backup for AKS).
 + > 3. If your source or target AKS cluster is in a private virtual network, then you need to create a private endpoint to connect the storage account with the AKS cluster.
2. **Install Backup Extension**
Once the vault and policy creation are complete, you need to perform the followi
3. **Enable Trusted Access**
- For the Backup vault to connect with the AKS cluster, you must enable Trusted Access as it allows the Backup vault to have a direct line of sight to the AKS cluster. Learn [how to enable Trusted Access]](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations).
+ For the Backup vault to connect with the AKS cluster, you must enable Trusted Access as it allows the Backup vault to have a direct line of sight to the AKS cluster. Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations).
->[!Note]
->For Backup Extension installation and Trusted Access enablement, the commands are available in Azure CLI only.
+> [!NOTE]
+> For Backup Extension installation and Trusted Access enablement, the commands are available in Azure CLI only.
## Configure backups
backup Backup Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-files.md
Title: Back up Azure File shares in the Azure portal description: Learn how to use the Azure portal to back up Azure File shares in the Recovery Services vault Previously updated : 03/04/2024 Last updated : 04/05/2024
Azure File share backup is a native, cloud based backup solution that protects y
## Prerequisites
-* Ensure that the file share is present in one of the [supported storage account types](azure-file-share-support-matrix.md).
+* Ensure that the file share is present in one of the supported storage account types. Review the [support matrix](azure-file-share-support-matrix.md).
* Identify or create a [Recovery Services vault](#create-a-recovery-services-vault) in the same region and subscription as the storage account that hosts the file share. * In case you have restricted access to your storage account, check the firewall settings of the account to ensure that the exception "Allow Azure services on the trusted services list to access this storage account" is granted. You can refer to [this](../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) link for the steps to grant an exception.
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
Title: Troubleshoot Agent and extension issues description: Symptoms, causes, and resolutions of Azure Backup failures related to agent, extension, and disks. Previously updated : 05/05/2022 Last updated : 04/08/2024 --++
Check if the given virtual machine is actively (not in pause state) protected by
The VM agent might have been corrupted, or the service might have been stopped. Reinstalling the VM agent helps get the latest version. It also helps restart communication with the service. 1. Determine whether the Windows Azure Guest Agent service is running in the VM services (services.msc). Try to restart the Windows Azure Guest Agent service and initiate the backup.+
+ :::image type="content" source="./media/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout/open-services-window.png" alt-text="Screenshot shows how to open Windows Services." lightbox="./media/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout/open-services-window.png":::
+
+ :::image type="content" source="./media/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout/windows-azure-guest-service-running.png" alt-text="Screenshot shows the Windows Azure Guest service is in running state." lightbox="./media/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout/windows-azure-guest-service-running.png":::
+ 2. If the Windows Azure Guest Agent service isn't visible in services, in Control Panel, go to **Programs and Features** to determine whether the Windows Azure Guest Agent service is installed. 3. If the Windows Azure Guest Agent appears in **Programs and Features**, uninstall the Windows Azure Guest Agent. 4. Download and install the [latest version of the agent MSI](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). You must have Administrator rights to complete the installation.
The following conditions might cause the snapshot task to fail:
3. In the **Settings** section, select **Locks** to display the locks. 4. To remove the lock, select the ellipsis and select **Delete**.
- ![Delete lock](./media/backup-azure-arm-vms-prepare/delete-lock.png)
+ :::image type="content" source="./media/backup-azure-arm-vms-prepare/delete-lock.png" alt-text="Screenshot shows how to delete a lock." lightbox="./media/backup-azure-arm-vms-prepare/delete-lock.png":::
### <a name="clean_up_restore_point_collection"></a> Clean up restore point collection
-After removing the lock, the restore points have to be cleaned up.
+After you remove the lock, the restore points have to be cleaned up.
If you delete the Resource Group of the VM, or the VM itself, the instant restore snapshots of managed disks remain active and expire according to the retention set. To delete the instant restore snapshots (if you don't need them anymore) that are stored in the Restore Point Collection, clean up the restore point collection according to the steps given below.
To manually clear the restore points collection, which isn't cleared because of
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. On the **Hub** menu, select **All resources**, select the Resource group with the following format AzureBackupRG_`<Geo>`_`<number>` where your VM is located.
- ![Select the resource group](./media/backup-azure-arm-vms-prepare/resource-group.png)
+ :::image type="content" source="./media/backup-azure-arm-vms-prepare/resource-group.png" alt-text="Screenshot shows how to select the resource group." lightbox="./media/backup-azure-arm-vms-prepare/resource-group.png":::
3. Select Resource group, the **Overview** pane is displayed. 4. Select **Show hidden types** option to display all the hidden resources. Select the restore point collections with the following format AzureBackupRG_`<VMName>`_`<number>`.
- ![Select the restore point collection](./media/backup-azure-arm-vms-prepare/restore-point-collection.png)
+ :::image type="content" source="./media/backup-azure-arm-vms-prepare/restore-point-collection.png" alt-text="Screenshot shows how to select the restore point collection." lightbox="./media/backup-azure-arm-vms-prepare/restore-point-collection.png":::
5. Select **Delete** to clean the restore point collection. 6. Retry the backup operation again.
backup Restore Afs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-afs.md
Title: Restore Azure File shares description: Learn how to use the Azure portal to restore an entire file share or specific files from a restore point created by Azure Backup. Previously updated : 03/04/2024 Last updated : 04/05/2024
You can also monitor restore progress from the Recovery Services vault:
>[!NOTE] >- Folders will be restored with original permissions if there is at least one file present in them. >- Trailing dots in any directory path can lead to failures in the restore.
+>- Restore of a file or folder with length *>2 KB* or with characters `xFFFF` or `xFFFE` isn't supported from snapshots.
+
## Next steps
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion doesn't move or store customer data out of the region it's deploye
Some regions support the ability to deploy Azure Bastion in an availability zone (or multiple, for zone redundancy). To deploy zonally, you can select the availability zones you want to deploy under instance details when you deploy Bastion using manually specified settings. You can't change zonal availability after Bastion is deployed. If you aren't able to select a zone, you might have selected an Azure region that doesn't yet support availability zones.
-For more information about availability zones, see [Availability Zones](https://learn.microsoft.com/azure/reliability/availability-zones-overview?tabs=azure-cli).
+For more information about availability zones, see [Availability Zones](../reliability/availability-zones-overview.md?tabs=azure-cli).
### <a name="vwan"></a>Does Azure Bastion support Virtual WAN?
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
description: Learn how copy and paste to and from a Windows VM using Bastion.
Previously updated : 10/31/2023 Last updated : 04/04/2024 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
By default, Azure Bastion is automatically enabled to allow copy and paste for a
## <a name="to"></a> Copy and paste
-For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette.
+For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette. Note that copy and paste isn't supported for passwords.
> [!NOTE] > Only text copy/paste is currently supported.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
This section helps you deploy Bastion to your virtual network. After Bastion is
* **Region**: The Azure public region in which the resource will be created. Choose the region where your virtual network resides.
- * **Availability zone**: Select the zone(s) from the dropdown, if desired. Only certain regions are supported. For more information, see the [What are availability zones?](https://learn.microsoft.com/azure/reliability/availability-zones-overview?tabs=azure-cli) article.
+ * **Availability zone**: Select the zone(s) from the dropdown, if desired. Only certain regions are supported. For more information, see the [What are availability zones?](../reliability/availability-zones-overview.md?tabs=azure-cli) article.
* **Tier**: The SKU. For this tutorial, select **Standard**. For information about the features available for each SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
batch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/accounts.md
Title: Batch accounts and Azure Storage accounts description: Learn about Azure Batch accounts and how they're used from a development standpoint. Previously updated : 06/01/2023 Last updated : 04/04/2024 # Batch accounts and Azure Storage accounts
An Azure Batch account is a uniquely identified entity within the Batch service.
## Batch accounts
-All processing and resources are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name, the URL of the account, and either an access key or a Microsoft Entra token.
+All processing and resources are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name and the account URL. Additionally, it can use either an access key or a Microsoft Entra token.
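For illustration, here's a minimal sketch of the Microsoft Entra option using `@azure/identity` and the Batch account URL directly over REST. The account URL and `api-version` values are placeholders, and the Batch SDKs normally handle this handshake for you.

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Placeholders: your Batch account URL and a Batch service api-version you have verified.
const accountUrl = "https://<account-name>.<region>.batch.azure.com";
const apiVersion = "<api-version>";

async function listJobs(): Promise<void> {
  const credential = new DefaultAzureCredential();
  // Acquire a Microsoft Entra token scoped to the Batch service.
  const token = await credential.getToken("https://batch.core.windows.net/.default");

  const response = await fetch(`${accountUrl}/jobs?api-version=${apiVersion}`, {
    headers: { Authorization: `Bearer ${token.token}` },
  });

  console.log(`Batch responded with HTTP ${response.status}`);
  console.log(await response.json());
}

listJobs().catch(console.error);
```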
You can run multiple Batch workloads in a single Batch account. You can also distribute your workloads among Batch accounts that are in the same subscription but located in different Azure regions.
For more information about storage accounts, see [Azure storage account overview
You can associate a storage account with your Batch account when you create the Batch account, or later. Consider your cost and performance requirements when choosing a storage account. For example, the GPv2 and blob storage account options support greater [capacity and scalability limits](https://azure.microsoft.com/blog/announcing-larger-higher-scale-storage-accounts/) compared with GPv1. (Contact Azure Support to request an increase in a storage limit.) These account options can improve the performance of Batch solutions that contain a large number of parallel tasks that read from or write to the storage account.
-When a storage account is linked to a Batch account, it's considered to be the *autostorage account*. An autostorage account is required if you plan to use the [application packages](batch-application-packages.md) capability, as it's used to store the application package .zip files. It can also be used for [task resource files](resource-files.md#storage-container-name-autostorage). Linking Batch accounts to autostorage can avoid the need for shared access signature (SAS) URLs to access the resource files.
+When a storage account is linked to a Batch account, it becomes the *autostorage account*. An autostorage account is necessary if you intend to use the [application packages](batch-application-packages.md) capability, as it stores the application package .zip files. It can also be used for [task resource files](resource-files.md#storage-container-name-autostorage). Linking Batch accounts to autostorage can avoid the need for shared access signature (SAS) URLs to access the resource files.
+
+> [!NOTE]
+> Batch nodes automatically unzip application package .zip files when they are pulled down from a linked storage account. This can cause the compute node local storage to fill up. For more information, see [Manage Batch application package](/cli/azure/batch/application/package).
## Next steps
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md
Title: Create a Batch account in the Azure portal description: Learn how to use the Azure portal to create and manage an Azure Batch account for running large-scale parallel workloads in the cloud. Previously updated : 07/18/2023- Last updated : 04/04/2024+ # Create a Batch account in the Azure portal
Select **Add**, then ensure that the **Azure Virtual Machines for deployment** a
:::image type="content" source="media/batch-account-create-portal/key-vault-access-policy.png" alt-text="Screenshot of the Access policy screen."::: -->
+> [!NOTE]
+> Currently, the Batch account supports only Key Vault access policies. When creating a Batch account, ensure that the key vault uses the associated access policy instead of Microsoft Entra RBAC permissions. For more information on how to add an access policy to your Azure Key Vault instance, see [Configure your Azure Key Vault instance](batch-customer-managed-key.md).
### Configure subscription quotas
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
All user interactions with Chaos Studio happen through Azure Resource Manager. I
* [Learn how to limit AKS network access to a set of IP ranges here](../aks/api-server-authorized-ip-ranges.md). You can obtain Chaos Studio's IP ranges by querying the `ChaosStudio` [service tag with the Service Tag Discovery API or downloadable JSON files](../virtual-network/service-tags-overview.md). * Currently, Chaos Studio can't execute Chaos Mesh faults if the AKS cluster has [local accounts disabled](../aks/manage-local-accounts-managed-azure-ad.md). * **Agent-based faults**: To use agent-based faults, the agent needs access to the Chaos Studio agent service. A VM or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`. You must replace the `<region>` placeholder with the region where your VM is deployed. An example is `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a VM in East US.-
-Chaos Studio doesn't support Azure Private Link for agent-based scenarios.
+* **Agent-based private networking**: The Chaos Studio agent now supports private networking. Please see [Private networking for Chaos Agent](chaos-studio-private-link-agent-service.md).
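If you need to verify that a VM has the required outbound access, a quick probe such as the following can help (Node.js 18 or later; the region value is an assumption to replace with your VM's region). Any HTTP response, even an error status, indicates the endpoint is reachable.

```typescript
// Minimal outbound connectivity probe for the Chaos Studio agent service endpoint.
// The region value below is an assumption; substitute the region where your VM is deployed.
const region = "eastus";
const endpoint = `https://acs-prod-${region}.chaosagent.trafficmanager.net`;

async function probe(): Promise<void> {
  try {
    const response = await fetch(endpoint, { method: "HEAD" });
    console.log(`Reached ${endpoint} (HTTP ${response.status}); outbound access looks open.`);
  } catch (error) {
    console.error(`Could not reach ${endpoint}:`, error);
  }
}

probe();
```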
## Service tags A [service tag](../virtual-network/service-tags-overview.md) is a group of IP address prefixes that can be assigned to inbound and outbound rules for network security groups. It automatically handles updates to the group of IP address prefixes without any intervention.
communication-services Closed Captions Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/closed-captions-logs.md
+
+ Title: Azure Communication Services Closed Captions logs
+
+description: Learn about logging for Azure Communication Services Closed captions.
+++ Last updated : 02/06/2024+++++
+# Azure Communication Services Closed Captions logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
+
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
+
+## Usage log schema
+
+| Property | Description |
+| -- | -- |
+| TimeGenerated | The timestamp (UTC) of when the log was generated. |
+| OperationName | The operation associated with the log record. For example, `ClosedCaptionsSummary`. |
+| Type | The log category of the event. Logs with the same log category and resource type have the same property fields. For example, `ACSCallClosedCaptionsSummary`. |
+| Level | The severity level of the operation. For example, `Informational`. |
+| CorrelationId | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| ResourceId | The ID of the Azure Communication Services resource that the call with closed captions belongs to. |
+| ResultType | The status of the operation. |
+| SpeechRecognitionSessionId | The ID of the closed captions session that this log refers to. |
+| SpokenLanguage | The spoken language of the closed captions. |
+| EndReason | The reason why the closed captions ended. |
+| CancelReason | The reason why the closed captions were canceled. |
+| StartTime | The time that the closed captions started. |
+| Duration | Duration of the closed captions in seconds. |
+
+Here's an example of a closed caption summary log:
+
+```json
+{
+ "TimeGenerated": "2023-11-14T23:18:26.4332392Z",
+ "OperationName": "ClosedCaptionsSummary",
+ "Category": "ACSCallClosedCaptionsSummary",
+ "Level": "Informational",
+ "CorrelationId": "336a0049-d98f-48ca-8b21-d39244c34486",
+ "ResourceId": "d2241234-bbbb-4321-b789-cfff3f4a6666",
+ "ResultType": "Succeeded",
+ "SpeechRecognitionSessionId": "eyJQbGF0Zm9ybUVuZHBvaW50SWQiOiI0MDFmNmUwMC01MWQyLTQ0YjAtODAyZi03N2RlNTA2YTI3NGYiLCJffffffXJjZVNwZWNpZmljSWQiOiIzOTc0NmE1Ny1lNzBkLTRhMTctYTI2Yi1hM2MzZTEwNTk0Mwwwww",
+ "SpokenLanguage": "cn-zh",
+ "EndReason": "Stopped",
+ "CancelReason": "",
+ "StartTime": "2023-11-14T03:04:05.123Z",
+ "Duration": "666.66"
+}
+```
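Once these logs flow into a Log Analytics workspace through your diagnostic settings, you can also query them programmatically. Here's a minimal sketch using the `@azure/monitor-query` and `@azure/identity` packages; the workspace ID is a placeholder.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

const workspaceId = "<log-analytics-workspace-id>"; // placeholder

async function queryClosedCaptionsLogs(): Promise<void> {
  const client = new LogsQueryClient(new DefaultAzureCredential());

  // Pull the last day of closed captions summary records.
  const kusto = `
    ACSCallClosedCaptionsSummary
    | where TimeGenerated > ago(1d)
    | project TimeGenerated, CorrelationId, SpokenLanguage, EndReason, Duration
    | order by TimeGenerated desc`;

  const result = await client.queryWorkspace(workspaceId, kusto, { duration: "P1D" });

  if (result.status === LogsQueryResultStatus.Success) {
    for (const table of result.tables) {
      console.log(table.rows);
    }
  }
}

queryClosedCaptionsLogs().catch(console.error);
```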
communication-services Email Optout Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-optout-management.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include-document.md)]
-This article provides the Email delivery best practices and how to use the Azure Communication Services Email suppression list feature that allows customers to manage opt-out capabilities for email communications. It also provides information on the features that are important for emails opt out management that helps you improve email complaint management, promote better email practices, and increase your email delivery success, boosting the likelihood of getting to recipients' inboxes efficiently.
+This article provides Email delivery best practices and describes how to use the Azure Communication Services Email suppression list. This feature enables customers to manage opt-out capabilities for email communications. It also provides information about the features that are important for email opt-out management. Use these features to improve email compliance management, promote better email practices, increase your email delivery success, and boost the likelihood of reaching recipient inboxes.
## Opt out or unsubscribe management: Ensuring transparent sender reputation
-It's important to know how interested your customers are in your email communication and to respect their opt-out or unsubscribe requests when they decide not to get emails from you. This helps you keep a good sender reputation. Whether you have a manual or automated process in place for handling unsubscribes, it's important to provide an "unsubscribe" link in the email payload you send. When recipients decide not to receive further emails, they can click on the 'unsubscribe' link and remove their email address from your mailing list.
+It's important to know how interested your customers are in your email communication and to respect their opt-out or unsubscribe requests when they decide not to get emails from you. This helps you keep a good sender reputation. Whether you have a manual or automated process in place for handling unsubscribes, it's important to provide an "unsubscribe" link in the email payload you send. When recipients decide not to receive further emails, they can click on the 'unsubscribe' link to remove their email address from your mailing list.
-The functionality of the links and instructions in the email is vital; they must be working correctly and promptly notify the application mailing list to remove the contact from the appropriate list or lists. A proper unsubscribe mechanism should be explicit and transparent from the subscriber's perspective, ensuring they know precisely which messages they're unsubscribing from. Ideally, they should be offered a preferences center that gives them the option to unsubscribe in cases where they're subscribed to multiple lists within your organization. This process prevents accidental unsubscribes and allows users to manage their opt-in and opt-out preferences effectively through the unsubscribe management process.
+The function of the links and instructions in the email is vital. They must be working correctly and promptly notify the application mailing list to remove the contact from the appropriate list or lists. A proper unsubscribe mechanism should be explicit and transparent from the subscriber's perspective. This helps ensure that they know precisely which messages they're unsubscribing from. Ideally, they should be offered a preferences center that gives them the option to unsubscribe in cases where they're subscribed to multiple lists within your organization. This process prevents accidental unsubscribes and enables users to manage their opt-in and opt-out preferences effectively through the unsubscribe management process.
## Managing emails opt out preferences with suppression list in Azure Communication Service Email
-Azure Communication Service Email offers a powerful platform with a centralized managed unsubscribe list with opt out preferences saved to our data store. This feature helps the developers to meet guidelines of email providers, requiring one-click list-unsubscribe implementation in the emails sent from our platform. To proactively identify and avoid significant delivery problems, suppression list features, including but not limited to:
+
+Azure Communication Service Email offers a powerful platform with a centralized managed unsubscribe list with opt-out preferences saved to our data store. This feature helps developers meet the guidelines of email providers, requiring one-click list-unsubscribe implementation in the emails sent from our platform. To proactively identify and avoid significant delivery problems, suppression list features include but aren't limited to:
* Offers domain-level, customer managed lists that provide opt-out capabilities. * Provides Azure resources that allow for Create, Read, Update, and Delete (CRUD) operations via Azure portal, Management SDKs, or REST APIs.
Azure Communication Service Email offers a powerful platform with a centralized
* Adds Email addresses programmatically for an easy opt-out process for unsubscribing. ### Benefits of opt out or unsubscribe management+ Using a suppression list in Azure Communication Services offers several benefits:
-* Compliance and Legal Considerations: This feature is crucial for adhering to legal responsibilities defined in local government legislation like the CAN-SPAM Act in the United States. It ensures that customers can easily manage opt-outs and maintain compliance with these regulations.
-* Better Sender Reputation: When emails aren't sent to users who have chosen to opt out, it helps protect the senderΓÇÖs reputation and lowers the chance of being blocked by email providers.
-* Improved User Experience: It respects the preferences of users who don't wish to receive communications, leading to a better user experience and potentially higher engagement rates with recipients who choose to receive emails.
-* Operational Efficiency: Suppression lists can be managed programmatically, allowing for efficient handling of large numbers of opt-out requests without manual intervention.
-* Cost-Effectiveness: By not sending emails to recipients who opted out, it reduces the volume of sent emails, which can lower operational costs associated with email delivery.
-* Data-Driven Decisions: The suppression list feature provides insights into the number of opt-outs, which can be valuable data for making informed decisions about email campaign strategies.
+* Compliance and Legal Considerations: Use opt-out links to meet legal responsibilities defined in local government legislation like the CAN-SPAM Act in the United States. It ensures that customers can easily manage opt-outs and maintain compliance with these regulations.
+* Better Sender Reputation: When emails aren't sent to users who opted out, it helps protect the sender's reputation and lower the chance of being blocked by email providers.
+* Improved User Experience: It respects the preferences of users who don't wish to receive communications. Collecting and storing email preferences lead to a better user experience and potentially higher engagement rates with recipients who choose to receive emails.
+* Operational Efficiency: Suppression lists can be managed programmatically. Use this feature to efficiently handle large numbers of opt-out requests without manual intervention.
+* Cost-Effectiveness: By not sending emails to recipients who opted out, it reduces the volume of sent emails. This can lower operational costs associated with email delivery.
+* Data-Driven Decisions: The suppression list feature provides insights into the number of opt-outs. Use this valuable data to make informed decisions about email campaign strategies.
-These benefits contribute to a more efficient, compliant, and user-friendly email communication system when using Azure Communication Services. To enable email logs and monitor your email delivery, follow the steps outlined in [Azure Communication Services email logs Communication Service in Azure Communication Service](../../concepts/analytics/logs/email-logs.md).
+These benefits contribute to a more efficient, compliant, and user-friendly email communication system using Azure Communication Services. To enable email logs and monitor your email delivery, follow the steps outlined in [Azure Communication Services email logs Communication Service in Azure Communication Service](../../concepts/analytics/logs/email-logs.md).
## Next steps
+* [Get started with creating and managing Domain level Suppression List in Email Azure Communication Services](../../quickstarts/email/manage-suppression-list-management-sdks.md)
+
The following documents may be interesting to you: - Familiarize yourself with the [Email client library](../email/sdk-features.md)
communication-services Migrate To Azure Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/migrate-to-azure-communication-services.md
+
+ Title: Migrate to Azure Communication Services Calling SDK
+
+description: Migrate a calling product from Twilio Video to Azure Communication Services Calling SDK.
++++ Last updated : 04/04/2024+++++
+# Migrate to Azure Communication Services Calling SDK
+
+Migrate now to a market leading CPaaS platform with regular updates and long-term support. The [Azure Communication Services Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) provides features and functions that improve upon the sunsetting Twilio Programmable Video.
+
+Both products are cloud-based platforms that enable developers to add voice and video calling features to their web applications. When you migrate to Azure Communication Services, the calling SDK has key advantages that may affect your choice of platform and require minimal changes to your existing code.
+
+In this article, we describe the main features and functions of the Azure Communication Services, and link to a document comparing both platforms. We also provide links to instructions for migrating an existing Twilio Programmable Video implementation to Azure Communication Services Calling SDK.
+
+## What is Azure Communication Services?
+
+Azure Communication Services are cloud-based APIs and SDKs that you can use to seamlessly integrate communication tools into your applications. Improve your customers' communication experience using our multichannel communication APIs to add voice, video, chat, text messaging/SMS, email, and more.
+
+## Why migrate from Twilio Video to Azure Communication Services?
+
+Expect more from your communication services platform:
+
+- **Ease of migration** - Use existing APIs and SDKs including a UI library to quickly migrate from Twilio Programmable Video to Microsoft's Calling SDK.
+
+- **Feature parity** - The Calling SDK provides features and performance that meet or exceed Twilio Video.
+
+- **Multichannel communication** - Choose from enterprise-level communication tools including voice, video, chat, SMS, and email.
+
+- **Maintenance and support** - Microsoft delivers stability and long-term commitment with active support and regular software updates.
+
+## Azure Communication Services and Microsoft are your video platform of the future
+
+Azure Communication Services Calling SDK is just one part of the Azure ecosystem. You can bundle the Calling SDK with many other Azure services to speed enterprise adoption of your Communications Platform as a Service (CPaaS) solution. Key reasons why Microsoft is the optimal solution:
+
+- **Teams integration** - Seamlessly integrate with Microsoft Teams to extend cloud-based meeting and messaging.
+
+- **Long-term guidance and support** - Microsoft continues to provide application support, updates, and innovation.
+
+- **Artificial Intelligence (AI)** - Microsoft invests heavily in AI research and its practical applications. We're actively applying AI to speed up technology adoption and ultimately improve the end user experience.
+
+- **Leverage the Microsoft ecosystem** - Azure Communication Services, the Calling SDK, the Teams platform, AI research and development, the list goes on. Microsoft invests heavily in data centers, cloud computing, AI, and dozens of business applications.
+
+- **Developer-centric approach** - Microsoft has a long history of investing in developer tools and technologies including GitHub, Visual Studio, Visual Studio Code, Copilot, support for an active developer community, and more.
+
+## Video conference feature comparison
+
+The Azure Communication Services Calling SDK has feature parity with Twilio's Video platform, with several additional features to further improve your communications platform. For a detailed feature map, see [Calling SDK overview > Detailed capabilities](./voice-video-calling/calling-sdk-features.md#detailed-capabilities).
+
+## Understand call types in Azure Communication Services
+
+Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. For more information, see [Voice and video concepts](../concepts/voice-video-calling/about-call-types.md).
+
+- **Voice Over IP (VoIP)** - When a user of your application calls another over an internet or data connection. Both signaling and media traffic are routed over the internet.
+- **Public Switched Telephone Network (PSTN)** - When your users call a traditional telephone number, calls are facilitated via PSTN voice calling. To make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users.
+- **One-to-One Calls** - When one of your users connects with another through our SDKs. You can establish the call via either VoIP or PSTN.
+- **Group Calls** - When three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.
+- **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md).
++
+## Key features available in Azure Communication Services Calling SDK
+
+- **Addressing** - Azure Communication Services provides [identities](../concepts/identity-model.md) for authenticating and addressing communication endpoints. These identities are used within Calling APIs, providing your customers with a clear view of who is connected to a call (the roster).
+- **Encryption** - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.
+- **Device Management and Media enablement** - The SDK manages audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.
+- **PSTN calling** - You can use the SDK to initiate voice calling using the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.
+- **Teams Meetings** - Your customers can use Azure Communication Services to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and video calls.
+- **Notifications** - Azure Communication Services provides APIs to notify clients of incoming calls. Notifications enable your application to listen for events (such as incoming calls) even when your application isn't running in the foreground.
+- **User Facing Diagnostics** - Azure Communication Services uses [events](../concepts/voice-video-calling/user-facing-diagnostics.md) to provide insights into underlying issues that might affect call quality. You can subscribe your application to triggers such as weak network signals or muted microphones for proactive issue awareness.
+- **Media Quality Statistics** - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md). Metrics include call quality information, empowering developers to enhance communication experiences.
+- **Video Constraints** - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. The SDK supports different call situations for varied levels of video quality, so developers can adjust parameters like resolution and frame rate.
+
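As a brief sketch of what working with these capabilities looks like, the following uses `@azure/communication-calling` and `@azure/communication-common` to create a call agent and start a one-to-one VoIP call. The access token and callee ID are placeholders; this is a minimal browser-side example, not a complete application.

```typescript
import { CallClient } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

// Placeholder: a user access token issued by your Azure Communication Services resource.
const userToken = "<user-access-token>";

async function startVoipCall(): Promise<void> {
  const callClient = new CallClient();
  const tokenCredential = new AzureCommunicationTokenCredential(userToken);
  const callAgent = await callClient.createCallAgent(tokenCredential, { displayName: "Migrated user" });

  // One-to-one VoIP call to another Communication Services identity (placeholder ID).
  const call = callAgent.startCall([{ communicationUserId: "<acs-user-id>" }]);
  call.on("stateChanged", () => console.log(`Call state: ${call.state}`));
}

startVoipCall().catch(console.error);
```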
+## Next steps
+
+[Migrate from Twilio Video to Azure Communication Services.](../tutorials/migrating-to-azure-communication-services-calling.md)
+
+For a feature map, see [Calling SDK overview > Detailed capabilities](./voice-video-calling/calling-sdk-features.md#detailed-capabilities)
communication-services Teams Interop Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md
In this quickstart, we use the Azure Communication Services Call Automation APIs
## Prerequisites - An Azure account with an active subscription.-- A Microsoft Teams phone license and a Teams tenant with administrative privileges. Teams phone license is a must in order to use this feature, learn more about Teams licenses [here](https://www.microsoft.com/en-us/microsoft-teams/compare-microsoft-teams-bundle-options). Administrative privileges are required to authorize Communication Services resource to call Teams users, explained later in Step 1.
+- A Microsoft Teams phone license and a Teams tenant with administrative privileges. Teams phone license is a must in order to use this feature, learn more about Teams licenses [here](https://www.microsoft.com/microsoft-teams/compare-microsoft-teams-bundle-options). Administrative privileges are required to authorize Communication Services resource to call Teams users, explained later in Step 1.
- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and valid connection string found by selecting Keys in left side menu on Azure portal. - [Acquire a PSTN phone number from the Communication Service resource](../../quickstarts/telephony/get-phone-number.md). Note the phone number you acquired to use in this quickstart. - An Azure Event Grid subscription to receive the `IncomingCall` event.
If you want to clean up and remove a Communication Services subscription, you ca
- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features. - Learn more about capabilities of [Teams Interoperability support with Azure Communication Services Call Automation](../../concepts/call-automation/call-automation-teams-interop.md) - Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.-- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Audio Conferencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/audio-conferencing.md
# Microsoft Teams Meeting Audio Conferencing
-In this article, you learn how to use Azure Communication Services Calling SDK to retrieve Microsoft Teams Meeting audio conferencing details. This functionality allows users who are already connected to a Microsoft Teams Meeting to be able to get the conference ID and dial in phone number associated with the meeting. At present, Teams audio conferencing feature returns a conference ID and only one dial-in toll or toll-free phone number depending on the priority assigned. In the future, Teams audio conferencing feature will return a collection of all toll and toll-free numbers, giving users control on what Teams meeting dial-in details to use
+In this article, you learn how to use Azure Communication Services Calling SDK to retrieve Microsoft Teams Meeting audio conferencing details. This functionality allows users who are already connected to a Microsoft Teams Meeting to get the conference ID and dial-in phone number associated with the meeting. The Teams audio conferencing feature returns a collection of all toll and toll-free numbers, with concomitant country names and city names, giving users control over which Teams meeting dial-in details to use.
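As a sketch of how an application already connected to a Teams meeting can read these details, the snippet below uses the Calling SDK's Teams meeting audio conferencing feature. The token and meeting link are placeholders, and the exact feature and method names should be confirmed against the SDK version you use.

```typescript
import { CallClient, Features } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

// Placeholders: a user access token and a Teams meeting link the user is allowed to join.
const userToken = "<user-access-token>";
const meetingLink = "<teams-meeting-link>";

async function getAudioConferencingDetails(): Promise<void> {
  const callClient = new CallClient();
  const callAgent = await callClient.createCallAgent(new AzureCommunicationTokenCredential(userToken));

  // Join the Teams meeting, then read its audio conferencing details once connected.
  const call = callAgent.join({ meetingLink });
  call.on("stateChanged", async () => {
    if (call.state === "Connected") {
      const audioConferencingFeature = call.feature(Features.TeamsMeetingAudioConferencing);
      const details = await audioConferencingFeature.getTeamsMeetingAudioConferencingDetails();
      console.log(details); // conference ID plus toll and toll-free dial-in numbers
    }
  });
}

getAudioConferencingDetails().catch(console.error);
```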
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
communication-services Manage Suppression List Management Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/manage-suppression-list-management-sdks.md
+
+ Title: Manage domain suppression lists in Azure Communication Services using the management client libraries
+
description: Learn about managing domain suppression lists in Azure Communication Services using the management client libraries
++++ Last updated : 11/21/2023+++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Manage domain suppression lists in Azure Communication Services using the management client libraries
+
+This quickstart covers the process of managing domain suppression lists in Azure Communication Services using the Azure Communication Services management client libraries.
++++
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
You can see that the status of the Communication Services Teams.ManageCalls and
If you run into the issue "The app is trying to access a service '1fd5118e-2576-4263-8130-9503064c837a'(Azure Communication Services) that your organization '{GUID}' lacks a service principal for. Contact your IT Admin to review the configuration of your service subscriptions or consent to the application to create the required service principal." your Microsoft Entra tenant lacks a service principal for the Azure Communication Services application. To fix this issue, use PowerShell as a Microsoft Entra administrator to connect to your tenant. Replace `Tenant_ID` with an ID of your Microsoft Entra tenancy.
-You will require **Application.ReadWrite.All** as shown bellow
-![image](https://github.com/brpiment/azure-docs-pr/assets/67699415/c53459fa-d64a-4ef2-8737-b75130fbc398)
+You will require **Application.ReadWrite.All** as shown below.
+
+[![Screenshot showing Application Read Write All.](./media/graph-permissions.png)](./media/graph-permissions.png#lightbox)
```script
Learn about the following concepts:
- [Use cases for communication as a Teams user](../concepts/interop/custom-teams-endpoint-use-cases.md) - [Azure Communication Services support Teams identities](../concepts/teams-endpoint.md) - [Teams interoperability](../concepts/teams-interop.md)-- [Single-tenant and multi-tenant authentication for Teams users](../concepts/interop/custom-teams-endpoint-authentication-overview.md)
+- [Single-tenant and multitenant authentication for Teams users](../concepts/interop/custom-teams-endpoint-authentication-overview.md)
- [Create and manage Communication access tokens for Teams users in a single-page application (SPA)](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-spa)
communication-services How To Collect Browser Verbose Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-browser-verbose-log.md
+
+ Title: References - How to collect verbose log from browsers
+
+description: Learn how to collect verbose log from browsers.
++++ Last updated : 02/24/2024+++++
+# How to collect verbose log from browsers
+When an issue originates within the underlying layer, collecting verbose logs in addition to web logs can provide valuable information.
+
+To collect the verbose log from the browser, start a browser session with specific command-line arguments. Then open your video application in that browser and run through the scenario you're debugging.
+Once the scenario is complete, you can close the browser.
+During log collection, keep only the necessary tabs open in the browser.
+
+To collect the verbose log of the Edge browser, open a command line window and execute:
+
+`"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe" --user-data-dir=C:\edge-debug --enable-logging --v=0 --vmodule=*/webrtc/*=2,*/libjingle/*=2,*media*=4 --no-sandbox`
+
+For Chrome, replace the executable path in the command with `C:\Program Files\Google\Chrome\Application\chrome.exe`.
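+
+For example, the full Chrome command might look like the following (assuming the default installation path and a separate data directory for the log):
+
+`"C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir=C:\chrome-debug --enable-logging --v=0 --vmodule=*/webrtc/*=2,*/libjingle/*=2,*media*=4 --no-sandbox`
+
+With this data directory, the Chrome log is written to `C:\chrome-debug\chrome_debug.log`.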
+
+Don't omit the `--user-data-dir` argument. This argument is used to specify where the logs are saved.
+
+This command enables verbose logging and saves the log to `chrome_debug.log`.
+It's important to have only the necessary pages open in the Edge browser, such as `edge://webrtc-internals` and the application web page.
+Keeping only necessary pages open ensures that logs from different web applications don't mix in the same log file.
+
+The log file is located at: `C:\edge-debug\chrome_debug.log`
+
+The verbose log is flushed each time the browser is opened with the specified command line.
+Therefore, after closing the browser, you should copy the log and check its file size and modification time to confirm that it contains the verbose log.
communication-services How To Collect Client Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-client-logs.md
+
+ Title: References - How to collect client logs
+
+description: Learn how to collect client logs.
++++ Last updated : 02/24/2024+++++
+# How to collect client logs
+Client logs can provide more detail when you're debugging an issue.
+To collect client logs, you can use [@azure/logger](https://www.npmjs.com/package/@azure/logger), which is used by WebJS calling SDK internally.
+
+```typescript
+import { setLogLevel, createClientLogger, AzureLogger } from '@azure/logger';
+import { CallClient } from '@azure/communication-calling';
+
+// Set the log level and create a logger scoped to this app
+setLogLevel('info');
+const logger = createClientLogger('ACS');
+
+// Pass the logger to the CallClient so the SDK uses it too
+const callClient = new CallClient({ logger });
+
+// App-level logging
+logger.info('....');
+
+```
+
+[@azure/logger](https://www.npmjs.com/package/@azure/logger) supports four different log levels:
+
+* verbose
+* info
+* warning
+* error
+
+For debugging purposes, `info` level logging is sufficient in most cases.
+
+In the browser environment, [@azure/logger](https://www.npmjs.com/package/@azure/logger) outputs logs to the console by default.
+You can redirect logs by overriding the `AzureLogger.log` method. For more information, see [@azure/logger](/javascript/api/overview/azure/logger-readme).
+
+Your app might keep logs in memory if it has a 'download log file' feature.
+If that's the case, you have to set a limit on the log size.
+Not setting a limit might cause memory issues on long-running calls.
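+
+A minimal sketch of redirecting SDK logs into a bounded in-memory buffer by overriding `AzureLogger.log` (the buffer name and size cap are illustrative assumptions):
+
+```typescript
+import { AzureLogger } from '@azure/logger';
+
+// Hypothetical in-memory buffer with a size cap so it doesn't grow unbounded on long-running calls
+const logBuffer: string[] = [];
+const MAX_LOG_LINES = 10000;
+
+AzureLogger.log = (...args) => {
+  if (logBuffer.length >= MAX_LOG_LINES) {
+    logBuffer.shift(); // drop the oldest entry to respect the size limit
+  }
+  logBuffer.push(args.map(String).join(' '));
+};
+```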
+
+Additionally, if you send logs to a remote service, consider mechanisms such as compression and scheduling.
+If the client has insufficient bandwidth, sending a large amount of log data in a short period of time can affect call quality.
communication-services How To Collect Diagnostic Audio Recordings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-diagnostic-audio-recordings.md
+
+ Title: References - How to collect diagnostic audio recordings
+
+description: Learn how to collect diagnostic audio recordings.
++++ Last updated : 02/24/2024+++++
+# How to collect diagnostic audio recordings
+To debug some issues, you might need audio recordings, especially when investigating audio quality problems such as distorted audio and echo.
+
+To collect diagnostic audio recordings, open the `chrome://webrtc-internals` (Chrome) or `edge://webrtc-internals` (Edge) page.
+
+When you select *Enable diagnostic audio recordings*, the browser shows a dialog asking for the download location.
++
+After you finish an ACS call, you should see files saved in the folder you chose.
++
+`*.output.N.wav` is the audio output sent to the speaker.
+
+`*.input.M.wav` is the audio input captured from the microphone.
+
+`*.aecdump` contains the necessary wav files for debugging audio after it's processed by the browser's audio processing module.
communication-services How To Collect Windows Audio Event Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/how-to-collect-windows-audio-event-log.md
+
+ Title: References - How to collect Windows audio event log
+
+description: Learn how to collect Windows audio event log.
++++ Last updated : 02/24/2024+++++
+# How to collect Windows audio event logs
+The Windows audio event log provides information about the audio device state around the time when the issue under investigation occurred.
+
+To collect the audio event log:
+* Open Windows Event Viewer.
+* Browse to the logs under *Application and Services Logs > Microsoft > Windows > Audio > Operational*.
+* Do either of the following:
+  * Select the events within the time range, right-click, and choose *Save Selected Events*.
+  * Right-click *Operational* and choose *Save All Events As*.
+
+Alternatively, you can export the log from the command line, as shown in the sketch below.
+
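+A minimal command-line sketch that exports the same log to a file (the channel name shown here is an assumption; confirm the exact name with `wevtutil el`):
+
+`wevtutil epl "Microsoft-Windows-Audio/Operational" C:\temp\audio-operational.evtx`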
communication-services Camera Freeze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/camera-freeze.md
+
+ Title: Understanding cameraFreeze UFD - User Facing Diagnostics
+
+description: Overview and details reference for understanding cameraFreeze UFD.
++++ Last updated : 03/27/2024+++++
+# cameraFreeze UFD
+A `cameraFreeze` UFD event with a `true` value occurs when the SDK detects that the input framerate goes down to zero, causing the video output to appear frozen or not changing.
+
+This event may indicate a problem with the user's video camera, or in certain instances, the device may stop sending video frames.
+For example, on certain Android device models, you may see a `cameraFreeze` UFD event when the user locks the screen or puts the browser in the background.
+In this situation, the Android operating system stops sending video frames, and thus on the other end of the call a user may see a `cameraFreeze` UFD event.
+
+| cameraFreeze | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example code to catch a cameraFreeze UFD event
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'cameraFreeze') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The cameraFreeze UFD recovered, notify the user
+ }
+ }
+});
+```
+
+## How to mitigate or resolve
+Your calling application should subscribe to events from the User Facing Diagnostics.
+You should also consider displaying a message on your user interface to alert users of potential camera issues.
+The user can try to stop and start the video again, switch to another camera, or switch calling devices to resolve the issue.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Camera Permission Denied https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/camera-permission-denied.md
+
+ Title: Understanding cameraPermissionDenied UFD - User Facing Diagnostics
+
+description: Overview and details reference for understanding cameraPermissionDenied UFD.
++++ Last updated : 03/27/2024+++++
+# cameraPermissionDenied UFD
+The `cameraPermissionDenied` UFD event with a `true` value occurs when the SDK detects that the camera permission was denied either at the browser layer or at the operating system level.
+
+| cameraPermissionDenied | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example code to catch a cameraPermissionDenied UFD event
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'cameraPermissionDenied') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The cameraPermissionDenied UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should invoke `DeviceManager.askDevicePermission` before the call starts to check whether the permission was granted or not.
+If the permission to use the camera is denied, the application should display a message on your user interface.
+Additionally, your application should acquire camera browser permission before listing the available camera devices.
+If there's no permission granted, the application is unable to get the detailed information of the camera devices on the user's system.
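+
+A minimal sketch of this check (the function name is illustrative and error handling is omitted):
+
+```typescript
+import { CallClient } from '@azure/communication-calling';
+
+async function checkCameraPermission(): Promise<void> {
+  const callClient = new CallClient();
+  const deviceManager = await callClient.getDeviceManager();
+
+  // Ask for camera (and microphone) permission before the call starts
+  const access = await deviceManager.askDevicePermission({ video: true, audio: true });
+  if (!access.video) {
+    // Permission was denied or not granted; show guidance on the UI
+  }
+
+  // Device details are only complete after permission is granted
+  const cameras = await deviceManager.getCameras();
+  console.log(cameras.map((camera) => camera.name));
+}
+```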
+
+The camera permission can also be revoked during a call, so your application should also subscribe to events from the User Facing Diagnostics events to display a message on the user interface.
+Users can then take steps to resolve the issue on their own, such as enabling the browser permission or checking whether they disabled the camera access at OS level.
+
+> [!NOTE]
+> Some browser platforms cache the permission results.
+
+If a user previously denied the permission at the browser layer, invoking the `askDevicePermission` API doesn't trigger the permission UI prompt, but the result still indicates that the permission was denied.
+Your application should show instructions and ask the user to reset or grant the browser camera permission manually.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Camera Start Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/camera-start-failed.md
+
+ Title: Understanding cameraStartFailed UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of cameraStartFailed UFD
++++ Last updated : 03/27/2024+++++
+# cameraStartFailed UFD
+The `cameraStartFailed` UFD event with a `true` value occurs when the SDK is unable to acquire the camera stream because the source is unavailable.
+This error typically happens when the specified video device is being used by another process.
+For example, the user may see this `cameraStartFailed` UFD event when they attempt to join a call with video in one browser, such as Chrome, while another browser, such as Edge, is already using the same camera.
+
+| cameraStartFailed | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'cameraStartFailed') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // cameraStartFailed UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+The `cameraStartFailed` UFD event is due to external reasons, so your application should subscribe to events from the User Facing Diagnostics and display a message on the UI to alert users of camera start failures. To resolve this issue, users can check if there are other processes using the same camera and close them if necessary.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Camera Start Timed Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/camera-start-timed-out.md
+
+ Title: Understanding cameraStartTimedOut UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of cameraStartTimedOut UFD
++++ Last updated : 03/27/2024+++++
+# cameraStartTimedOut UFD
+The `cameraStartTimedOut` UFD event with a `true` value occurs when the SDK is unable to acquire the camera stream because the promise returned by the `getUserMedia` browser method doesn't resolve within a certain period of time.
+This issue can happen when the user starts a call with video enabled, but the browser displays a UI permission prompt and the user doesn't respond to it.
+
+| cameraStartTimedOut | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'cameraStartTimedOut') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The cameraStartTimedOut UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+The application should invoke `DeviceManager.askDevicePermission` before the call starts to check whether the permission was granted or not.
+Invoking `DeviceManager.askDevicePermission` also reduces the possibility that the user doesn't respond to the UI permission prompt after the call starts.
+
+If the timeout issue is caused by hardware problems, users can try selecting a different camera device when starting the video stream.
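+
+A minimal sketch of retrying video with a different camera (assumes `call` and `deviceManager` already exist in your app, as in the earlier examples):
+
+```typescript
+import { LocalVideoStream } from '@azure/communication-calling';
+
+const cameras = await deviceManager.getCameras();
+if (cameras.length > 1) {
+  // Try the second camera in the list as a simple fallback
+  await call.startVideo(new LocalVideoStream(cameras[1]));
+}
+```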
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Camera Stopped Unexpectedly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/camera-stopped-unexpectedly.md
+
+ Title: Understanding cameraStoppedUnexpectedly UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of cameraStoppedUnexpectedly UFD.
++++ Last updated : 03/27/2024+++++
+# cameraStoppedUnexpectedly UFD
+The `cameraStoppedUnexpectedly` UFD event with a `true` value occurs when the SDK detects that the camera track was muted.
+
+Keep in mind that this event relates to the camera track's `mute` event triggered by an external source.
+The event can be triggered on mobile browsers when the browser goes to the background.
+Additionally, in some browser implementations, the browser sends black frames when the video input track is muted.
+
+| cameraStoppedUnexpectedly | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'cameraStoppedUnexpectedly') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The cameraStoppedUnexpectedly UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message on the user interface to alert users of any camera state changes.
+This approach ensures that users are aware of camera issues and aren't surprised if other participants can't see their video.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Capturer Start Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/capturer-start-failed.md
+
+ Title: Understanding capturerStartFailed UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of capturerStartFailed UFD.
++++ Last updated : 03/27/2024+++++
+# capturerStartFailed UFD
+The `capturerStartFailed` UFD event with a `true` value occurs when the SDK is unable to acquire the screen sharing stream because the source is unavailable.
+This issue can happen when the underlying layer prevents the sharing of the selected source.
+
+| capturerStartFailed | Details |
+| -||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'capturerStartFailed') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The capturerStartFailed UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+The `capturerStartFailed` UFD event is due to external causes, so your application should subscribe to events from the User Facing Diagnostics and display a message on your user interface to alert users of screen sharing failures.
+Users can then take steps to resolve the issue on their own, such as checking if there are other processes causing this issue.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Capturer Stopped Unexpectedly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/capturer-stopped-unexpectedly.md
+
+ Title: Understanding capturerStoppedUnexpectedly UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of capturerStoppedUnexpectedly UFD.
++++ Last updated : 03/26/2024+++++
+# capturerStoppedUnexpectedly UFD
+The `capturerStoppedUnexpectedly` UFD event with a `true` value occurs when the SDK detects that the screen sharing track was muted.
+This issue can happen due to external reasons and depends on the browser implementation.
+For example, if the user shares a window and minimizes that window, the `capturerStoppedUnexpectedly` UFD event may fire.
+
+| capturerStoppedUnexpectedly | Details |
+| -||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'capturerStoppedUnexpectedly') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The capturerStoppedUnexpectedly UFD recovered, notify the user
+ }
+ }
+});
+```
+
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message on your user interface to alert users of screen sharing issues.
+Users can then take steps to resolve the issue on their own, such as checking whether they accidentally minimized the window being shared.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Microphone Mute Unexpectedly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/microphone-mute-unexpectedly.md
+
+ Title: Understanding microphoneMuteUnexpectedly UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of microphoneMuteUnexpectedly UFD
++++ Last updated : 03/27/2024+++++
+# microphoneMuteUnexpectedly UFD
+The `microphoneMuteUnexpectedly` UFD event with a `true` value occurs when the SDK detects that the microphone track was muted.
+Keep in mind that this event relates to the microphone track's `mute` event when it's triggered by an external source rather than by the SDK mute API.
+The underlying layer can trigger the event, for example, when the audio stack mutes the audio input session. The hardware mute button of some headset models can also trigger the `microphoneMuteUnexpectedly` UFD event.
+Additionally, some browser platforms, such as Safari on iOS, may mute the microphone when certain interruptions occur, such as an incoming phone call.
+
+| microphoneMuteUnexpectedly | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'microphoneMuteUnexpectedly') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The microphoneMuteUnexpectedly UFD recovered, notify the user
+ }
+ }
+});
+```
+
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display an alert message to users about any microphone state changes. That way, users are aware of mute issues and aren't surprised if other participants can't hear their audio during a call.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Microphone Not Functioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/microphone-not-functioning.md
+
+ Title: Understanding microphoneNotFunctioning UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of microphoneNotFunctioning UFD
++++ Last updated : 03/27/2024+++++
+# microphoneNotFunctioning UFD
+The `microphoneNotFunctioning` UFD event with a `true` value occurs when the SDK detects that the microphone track was ended. The microphone track ending happens in many situations.
+For example, unplugging a microphone in use triggers the browser to end the microphone track. The SDK then fires the `microphoneNotFunctioning` UFD event.
+It can also occur when the user revokes the microphone permission at the browser or OS level. The underlying layers, such as the audio driver or the media stack at the OS level, may also end the session, causing the browser to end the microphone track.
+
+| microphoneNotFunctioning | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'microphoneNotFunctioning') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The microphoneNotFunctioning UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+The application should subscribe to events from the User Facing Diagnostics and display a message on the UI to alert users of any microphone issues.
+Users can then take steps to resolve the issue on their own.
+For example, they can unplug and plug the headset device back in; sometimes muting and unmuting the microphone also helps.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Microphone Permission Denied https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/microphone-permission-denied.md
+
+ Title: Understanding microphonePermissionDenied UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of microphonePermissionDenied UFD.
++++ Last updated : 03/27/2024+++++
+# microphonePermissionDenied UFD
+The `microphonePermissionDenied` UFD event with a `true` value occurs when the SDK detects that the microphone permission was denied either at the browser or OS level.
+
+| microphonePermissionDenied | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'microphonePermissionDenied') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The microphonePermissionDenied UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should invoke `DeviceManager.askDevicePermission` before a call starts to check whether the proper permissions were granted or not.
+If the permission is denied, your application should display a message in the user interface to alert about this situation.
+Additionally, your application should acquire browser permission before listing the available microphone devices.
+If there's no permission granted, your application is unable to get the detailed information of the microphone devices on the user's system.
+
+The permission can also be revoked during the call.
+Your application should also subscribe to events from the User Facing Diagnostics and display a message on the user interface to alert users of any permission issues.
+Users can resolve the issue on their own, by enabling the browser permission or checking whether they disabled the microphone access at OS level.
+
+> [!NOTE]
+> Some browser platforms cache the permission results.
+
+If a user previously denied the permission at the browser layer, invoking the `askDevicePermission` API doesn't trigger the permission UI prompt, but the result still indicates that the permission was denied.
+Your application should show instructions and ask the user to reset or grant the browser microphone permission manually.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Network Receive Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/network-receive-quality.md
+
+ Title: Understanding networkReceiveQuality UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of networkReceiveQuality UFD
++++ Last updated : 03/27/2024+++++
+# networkReceiveQuality UFD
+The `networkReceiveQuality` UFD event with a `Bad` value indicates the presence of network quality issues for incoming streams, as detected by the ACS Calling SDK.
+This event suggests that there may be problems with the network connection between the local endpoint and remote endpoint.
+When this UFD event fires with a `Bad` value, the user may experience degraded audio quality.
+
+| networkReceiveQualityUFD | Details |
+| -||
+| UFD type | NetworkDiagnostics |
+| value type | DiagnosticQuality |
+| possible values | Good, Poor, Bad |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).network.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'networkReceiveQuality') {
+ if (diagnosticInfo.value === DiagnosticQuality.Bad) {
+ // network receive quality bad, show a warning message on UI
+ } else if (diagnosticInfo.value === DiagnosticQuality.Poor) {
+ // network receive quality poor, notify the user
+ } else if (diagnosticInfo.value === DiagnosticQuality.Good) {
+ // network receive quality recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+From the perspective of the ACS Calling SDK, network issues are considered external problems.
+To solve network issues, you need to understand the network topology and identify the nodes that are causing the problem.
+These parts involve network infrastructure, which is outside the scope of the ACS Calling SDK.
+
+Your application should subscribe to events from the User Facing Diagnostics.
+Display a message on your user interface that informs users of network quality issues and potential audio quality degradation.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Network Reconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/network-reconnect.md
+
+ Title: Understanding networkReconnect UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of networkReconnect UFD
++++ Last updated : 03/27/2024+++++
+# networkReconnect UFD
+The `networkReconnect` UFD event with a `Bad` value occurs when the Interactive Connectivity Establishment (ICE) transport state on the connection is `failed`.
+This event indicates that there may be network issues between the two endpoints, such as packet loss or firewall issues.
+The connection failure is detected by the ICE consent freshness mechanism implemented in the browser.
+
+When an endpoint doesn't receive a reply after a certain period, the ICE transport state will transition to `disconnected`.
+If there's still no response received, the state then becomes `failed`.
+
+Since the endpoint didn't receive a reply for a period of time, it's possible that incoming packets weren't received or outgoing packets didn't reach the other party.
+This situation may result in the user not hearing or seeing the other party.
+
+| networkReconnect UFD | Details |
+| ||
+| UFD type | NetworkDiagnostics |
+| value type | DiagnosticQuality |
+| possible values | Good, Bad |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).network.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'networkReconnect') {
+ if (diagnosticInfo.value === DiagnosticQuality.Bad) {
+ // media transport disconnected, show a warning message on UI
+ } else if (diagnosticInfo.value === DiagnosticQuality.Good) {
+ // media transport recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+From the perspective of the ACS Calling SDK, network issues are considered external problems.
+To solve network issues, you need to understand the network topology and identify the nodes that are causing the problem.
+These parts involve network infrastructure, which is outside the scope of the ACS Calling SDK.
+
+Internally, the ACS Calling SDK triggers reconnection after a `networkReconnect` UFD event with a `Bad` value is fired. If the connection recovers, a `networkReconnect` UFD event with a `Good` value is fired.
+
+Your application should subscribe to events from the User Facing Diagnostics.
+Display a message on your user interface that informs users of network connection issues and potential audio loss.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Network Relays Not Reachable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/network-relays-not-reachable.md
+
+ Title: Understanding networkRelaysNotReachable UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of networkRelaysNotReachable UFD
++++ Last updated : 03/27/2024+++++
+# networkRelaysNotReachable UFD
+The `networkRelaysNotReachable` UFD event with a `true` value occurs when the media connection fails to establish and no relay candidates are available. This issue usually happens when the firewall policy blocks connections between the local client and relay servers.
+
+When users see the `networkRelaysNotReachable` UFD event, it also indicates that the local client isn't able to make a direct connection to the remote endpoint.
+
+| networkRelaysNotReachable UFD | Details |
+| ||
+| UFD type | NetworkDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).network.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'networkRelaysNotReachable') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The networkRelaysNotReachable UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics.
+Display a message on your user interface and inform users of network setup issues.
+
+Users should follow the *Firewall Configuration* guidance in the [Network recommendations](../../../../../concepts/voice-video-calling/network-requirements.md) document. It's also recommended that users check their network address translation (NAT) settings or whether their firewall policy blocks User Datagram Protocol (UDP) packets.
+
+If the organization policy doesn't allow users to connect to Microsoft TURN relay servers, you can configure custom TURN servers to avoid connection failures. For more information, see the [Force calling traffic to be proxied across your own server](../../../../../tutorials/proxy-calling-support-tutorial.md) tutorial.
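+
+A minimal sketch of pointing the SDK at a custom TURN server (the option shape follows that tutorial; the server URL and credentials are placeholders, and exact option names may vary by SDK version):
+
+```typescript
+import { CallClient } from '@azure/communication-calling';
+
+// Configure a custom TURN server before creating the CallClient
+const callClient = new CallClient({
+  networkConfiguration: {
+    turn: {
+      iceServers: [
+        {
+          urls: ['turn:turn.contoso.com:3478'],
+          username: '<turn-username>',
+          credential: '<turn-credential>'
+        }
+      ]
+    }
+  }
+});
+```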
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Network Send Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/network-send-quality.md
+
+ Title: Understanding networkSendQuality UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of networkSendQuality UFD
++++ Last updated : 03/27/2024+++++
+# networkSendQuality UFD
+The `networkSendQuality` UFD event with a `Bad` value indicates that there are network quality issues for outgoing streams, such as packet loss, as detected by the ACS Calling SDK.
+This event suggests that there may be problems with the network quality between the local endpoint and the remote endpoint.
++
+| networkSendQualityUFD | Details |
+| -||
+| UFD type | NetworkDiagnostics |
+| value type | DiagnosticQuality |
+| possible values | Good, Bad |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).network.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'networkSendQuality') {
+ if (diagnosticInfo.value === DiagnosticQuality.Bad) {
+ // network send quality bad, show a warning message on UI
+ } else if (diagnosticInfo.value === DiagnosticQuality.Good) {
+ // network send quality recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+From the perspective of the ACS Calling SDK, network issues are considered external problems.
+To solve network issues, it's typically necessary to have an understanding of the network topology and the nodes that are causing the problem.
+These parts involve network infrastructure, which is outside the scope of the ACS Calling SDK.
+
+Your application should subscribe to events from the User Facing Diagnostics and display a message on the user interface, so that users are aware of network quality issues. While these issues are often temporary and recover soon, frequent occurrences of the `networkSendQuality` UFD event for a particular user may require further investigation.
+For example, users should check their network equipment or check with their internet service provider (ISP).
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services No Microphone Devices Enumerated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/no-microphone-devices-enumerated.md
+
+ Title: Understanding noMicrophoneDevicesEnumerated UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of noMicrophoneDevicesEnumerated UFD
++++ Last updated : 03/27/2024+++++
+# noMicrophoneDevicesEnumerated UFD
+The `noMicrophoneDevicesEnumerated` UFD event with a `true` value occurs when the browser API `navigator.mediaDevices.enumerateDevices` doesn't include any audio input devices.
+This result means that there are no microphones available on the user's machine. This issue is typically caused by the user unplugging or disabling the microphone.
+
+> [!NOTE]
+> This UFD event is unrelated to a user granting microphone permission.
+
+Even if a user doesn't grant the microphone permission at the browser level, the `DeviceManager.getMicrophones` API still returns a microphone device info with an empty name, which indicates the presence of a microphone device on the user's machine.
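+
+A minimal sketch that distinguishes the two cases (assumes `deviceManager` was obtained from `callClient.getDeviceManager()`):
+
+```typescript
+const microphones = await deviceManager.getMicrophones();
+if (microphones.length === 0) {
+  // No audio input devices at all; this matches the noMicrophoneDevicesEnumerated condition
+} else if (microphones.every((microphone) => microphone.name === '')) {
+  // Devices exist, but their names are empty because microphone permission wasn't granted yet
+}
+```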
+
+| noMicrophoneDevicesEnumeratedUFD | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'noMicrophoneDevicesEnumerated') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The noMicrophoneDevicesEnumerated UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message on the user interface to alert users of any device setup issues. Users can then take steps to resolve the issue on their own, such as plugging in a headset or checking whether they disabled the microphone devices.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services No Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/no-network.md
+
+ Title: Understanding noNetwork UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of noNetwork UFD
++++ Last updated : 03/27/2024+++++
+# noNetwork UFD
+The `noNetwork` UFD event with a `true` value occurs when there's no network available for ICE candidates being gathered, which means there are network setup issues in the local environment, such as a disconnected Wi-Fi or Ethernet cable.
+Additionally, if the adapter fails to acquire an IP address and there are no other networks available, this situation can also result in a `noNetwork` UFD event.
+
+| noNetwork UFD | Details |
+| ||
+| UFD type | NetworkDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).network.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'noNetwork') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // noNetwork UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message in your user interface to alert users of any network setup issues.
+Users can then take steps to resolve the issue on their own.
+
+Users should also check if they disabled the network adapters or whether they have an available network.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services No Speaker Devices Enumerated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/no-speaker-devices-enumerated.md
+
+ Title: Understanding noSpeakerDevicesEnumerated UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of noSpeakerDevicesEnumerated UFD
++++ Last updated : 03/27/2024+++++
+# noSpeakerDevicesEnumerated UFD
+The `noSpeakerDevicesEnumerated` UFD event with a `true` value occurs when there's no speaker device presented in the device list returned by the browser API. This issue occurs when the `navigator.mediaDevices.enumerateDevices` browser API doesn't include any audio output devices. This event indicates that there are no speakers available on the user's machine, which could be because the user unplugged or disabled the speaker.
+
+On some platforms such as iOS, the browser doesn't provide the audio output devices in the device list. In this case, the SDK considers this expected behavior and doesn't fire the `noSpeakerDevicesEnumerated` UFD event.
+
+| noSpeakerDevicesEnumerated UFD | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'noSpeakerDevicesEnumerated') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The noSpeakerDevicesEnumerated UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message on your user interface to alert users of any device setup issues.
+Users can then take steps to resolve the issue on their own, such as plugging in a headset or checking whether they disabled the speaker devices.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Screenshare Recording Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/screenshare-recording-disabled.md
+
+ Title: Understanding screenshareRecordingDisabled UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of screenshareRecordingDisabled UFD.
++++ Last updated : 03/27/2024+++++
+# screenshareRecordingDisabled UFD
+The `screenshareRecordingDisabled` UFD event with a `true` value occurs when the SDK detects that the screen sharing permission was denied in the browser or OS settings on macOS.
+
+| screenshareRecordingDisabled | Details |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'screenshareRecordingDisabled') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The screenshareRecordingDisabled UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+Your application should subscribe to events from the User Facing Diagnostics and display a message on the user interface to alert users of any screen sharing permission issues.
+Users can then take steps to resolve the issue on their own.
+
+Users should also check if they disabled the screen sharing permission from OS settings.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Speaking While Microphone Is Muted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/references/ufd/speaking-while-microphone-is-muted.md
+
+ Title: Understanding speakingWhileMicrophoneIsMuted UFD - User Facing Diagnostics
+
+description: Overview and detailed reference of speakingWhileMicrophoneIsMuted UFD
++++ Last updated : 03/27/2024+++++
+# speakingWhileMicrophoneIsMuted UFD
+The `speakingWhileMicrophoneIsMuted` UFD event with a `true` value occurs when the SDK detects audio input activity even though the user muted the microphone.
+This event can remind a user who wants to speak but forgot to unmute their microphone.
+In this case, because the microphone state in the SDK is muted, no audio is sent.
+
+| speakingWhileMicrophoneIsMuted | Detail |
+| --||
+| UFD type | MediaDiagnostics |
+| value type | DiagnosticFlag |
+| possible values | true, false |
+
+## Example
+```typescript
+call.feature(Features.UserFacingDiagnostics).media.on('diagnosticChanged', (diagnosticInfo) => {
+ if (diagnosticInfo.diagnostic === 'speakingWhileMicrophoneIsMuted') {
+ if (diagnosticInfo.value === true) {
+ // show a warning message on UI
+ } else {
+ // The speakingWhileMicrophoneIsMuted UFD recovered, notify the user
+ }
+ }
+});
+```
+## How to mitigate or resolve
+The `speakingWhileMicrophoneIsMuted` UFD event isn't an error, but rather an indication of an inconsistency between the audio input volume and the microphone's muted state in the SDK.
+The purpose of this event is for the application to show a message on your user interface as a hint, so the user knows that the microphone is muted while they're speaking.
+
+## Next steps
+* Learn more about [User Facing Diagnostics feature](../../../../../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md
Complete the following prerequisites and steps to set up the sample.
`git clone https://github.com/Azure-Samples/communication-services-web-chat-hero.git`
- Or clone the repo using any method described in [Clone an existing Git repo](https://learn.microsoft.com/azure/devops/repos/git/clone).
+ Or clone the repo using any method described in [Clone an existing Git repo](/azure/devops/repos/git/clone).
3. Get the `Connection String` and `Endpoint URL` from the Azure portal or by using the Azure CLI.
communication-services Migrating To Azure Communication Services Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md
Title: Tutorial - Migrating from Twilio video to ACS
+ Title: Tutorial - Migrate from Twilio Video to Azure Communication Services
-description: Learn how to migrate a calling product from Twilio to Azure Communication Services.
+description: Learn how to migrate a calling product from Twilio Video to Azure Communication Services.
zone_pivot_groups: acs-plat-web-ios-android
-# Migrating from Twilio Video to Azure Communication Services
+# Migrate from Twilio Video to Azure Communication Services
This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md). Both Twilio Video and Azure Communication Services Calling SDK are cloud-based platforms that enable developers to add voice and video calling features to their web applications.
However, there are some key differences between them that may affect your choice
## Key features available in Azure Communication Services Calling SDK

- **Addressing** - Azure Communication Services provides [identities](../concepts/identity-model.md) for authenticating and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster).
- **Encryption** - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.
- **Device Management and Media enablement** - The SDK manages audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.
- **PSTN calling** - You can use the SDK to initiate voice calling using the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.
- **Teams Meetings** - Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and video calls.
- **Notifications** - Azure Communication Services provides APIs to notify clients of incoming calls. This enables your application to listen for events (such as incoming calls) even when your application isn't running in the foreground.
- **User Facing Diagnostics** - Azure Communication Services uses [events](../concepts/voice-video-calling/user-facing-diagnostics.md) to provide insights into underlying issues that might affect call quality. You can subscribe your application to triggers such as weak network signals or muted microphones for proactive issue awareness.
- **Media Quality Statistics** - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md). Metrics include call quality information, empowering developers to enhance communication experiences.
- **Video Constraints** - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. The SDK supports different call situations for varied levels of video quality, so developers can adjust parameters like resolution and frame rate.
-| **Feature** | **Web (JavaScript)** | **iOS** | **Android** | **Agnostic** |
+| **Feature** | **Web (JavaScript)** | **iOS** | **Android** | **Platform neutral** |
|-|--|--|-|-|
| **Install** | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-web#install-the-package) | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-ios#install-the-package-and-dependencies-with-cocoapods) | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-android#install-the-package) | |
| **Import** | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-web#install-the-package) | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-ios#install-the-package-and-dependencies-with-cocoapods) | [✔️](../quickstarts/voice-video-calling/getting-started-with-calling.md?tabs=uwp&pivots=platform-android#install-the-package) | |
However, there are some key differences between them that may affect your choice
| **Picture-in-picture** | | [✔️](../how-tos/ui-library-sdk/picture-in-picture.md?tabs=kotlin&pivots=platform-ios) | [✔️](../how-tos/ui-library-sdk/picture-in-picture.md?tabs=kotlin&pivots=platform-android) | |
-**For more information about using the Calling SDK on different platforms, see** [**Calling SDK overview > Detailed capabilities**](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities)**.**
-If you're embarking on a new project from the ground up, see the [Quickstart: Add 1:1 video calling to your app](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web).
--
-### Calling support
-
-The Azure Communication Services Calling SDK supports the following streaming configurations:
-
-| Limit | Web | Windows/Android/iOS |
-||-|--|
-| Maximum \# of outgoing local streams that can be sent simultaneously | 1 video and 1 screen sharing | 1 video + 1 screen sharing |
-| Maximum \# of incoming remote streams that can be rendered simultaneously | 9 videos + 1 screen sharing on desktop browsers\*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing |
-
-## Call Types in Azure Communication Services
-
-Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. For more information, see [Voice and video concepts](../concepts/voice-video-calling/about-call-types.md).
- **Voice Over IP (VoIP)** - When a user of your application calls another over an internet or data connection. Both signaling and media traffic are routed over the internet.
- **Public Switched Telephone Network (PSTN)** - When your users call a traditional telephone number, calls are facilitated via PSTN voice calling. To make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users.
- **One-to-One Calls** - When one of your users connects with another through our SDKs. You can establish the call via either VoIP or PSTN.
- **Group Calls** - When three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.
- **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md).

::: zone pivot="platform-web" [!INCLUDE [Migrating to ACS on WebJS SDK](./includes/twilio-to-acs-video-webjs-tutorial.md)]
Azure Communication Services offers various call types. The type of call you cho
::: zone pivot="platform-android" [!INCLUDE [Migrating to ACS on Android SDK](./includes/twilio-to-acs-video-android-tutorial.md)]
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
You can use the following built-in connectors to access specific services and sy
[![Azure AI Search icon][azure-ai-search-icon]][azure-ai-search-doc] \ \
- [**Azure API Search**][azure-ai-search-doc]<br>(*Standard workflow only*)
+ [**Azure AI Search**][azure-ai-search-doc]<br>(*Standard workflow only*)
\ \ Connect to AI Search so that you can perform document indexing and search operations in your workflow.
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
ms.suite: integration Previously updated : 01/22/2024 Last updated : 04/08/2024 # Call external HTTP or HTTPS endpoints from workflows in Azure Logic Apps
If an HTTP trigger or action includes these headers, Azure Logic Apps removes th
Although Azure Logic Apps won't stop you from saving logic apps that use an HTTP trigger or action with these headers, Azure Logic Apps ignores these headers.
+<a name="mismatch-content-type"></a>
+
+### Response content doesn't match the expected content type
+
+The HTTP action throws a **BadRequest** error if it calls the backend API with the `Content-Type` header set to **application/json**, but the response from the backend doesn't actually contain JSON content, which fails internal JSON format validation.
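+
+For example, a minimal sketch of an HTTP action fragment (from the workflow definition's `actions` section) that can hit this error when the backend returns non-JSON content; the URI is a placeholder:
+
+```json
+"HTTP": {
+  "type": "Http",
+  "inputs": {
+    "method": "GET",
+    "uri": "https://contoso.example.com/api/report",
+    "headers": {
+      "Content-Type": "application/json"
+    }
+  },
+  "runAfter": {}
+}
+```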
+ ## Next steps * [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
To use manual jobs, you first create a job with trigger type `Manual` and then s
az containerapp job create \ --name "$JOB_NAME" --resource-group "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type "Manual" \
- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ```
Create a job in the Container Apps environment that starts every minute using th
az containerapp job create \ --name "$JOB_NAME" --resource-group "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type "Schedule" \
- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \ --cron-expression "*/1 * * * *"
container-apps Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md
Previously updated : 08/17/2023 Last updated : 04/02/2024
The following table compares common scenarios for apps and jobs:
| An HTTP server that serves web content and API requests | App | Configure an [HTTP scale rule](scale-app.md#http). | | A process that generates financial reports nightly | Job | Use the [*Schedule* job type](#scheduled-jobs) and configure a cron expression. | | A continuously running service that processes messages from an Azure Service Bus queue | App | Configure a [custom scale rule](scale-app.md#custom). |
-| A job that processes a single message or a small batch of messages from an Azure queue and exits | Job | Use the *Event* job type and [configure a custom scale rule](tutorial-event-driven-jobs.md) to trigger job executions. |
+| A job that processes a single message or a small batch of messages from an Azure queue and exits | Job | Use the *Event* job type and [configure a custom scale rule](tutorial-event-driven-jobs.md) to trigger job executions when there are messages in the queue. |
| A background task that's triggered on-demand and exits when finished | Job | Use the *Manual* job type and [start executions](#start-a-job-execution-on-demand) manually or programmatically using an API. | | A self-hosted GitHub Actions runner or Azure Pipelines agent | Job | Use the *Event* job type and configure a [GitHub Actions](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-github-actions) or [Azure Pipelines](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-azure-pipelines) scale rule. | | An Azure Functions app | App | [Deploy Azure Functions to Container Apps](../azure-functions/functions-container-apps-hosting.md). | | An event-driven app using the Azure WebJobs SDK | App | [Configure a scale rule](scale-app.md#custom) for each event source. |
+## Concepts
+
+A Container Apps environment is a secure boundary around one or more container apps and jobs. Jobs involve a few key concepts:
+
+* **Job:** A job defines the default configuration that is used for each job execution. The configuration includes the container image to use, the resources to allocate, and the command to run.
+* **Job execution:** A job execution is a single run of a job that is triggered manually, on a schedule, or in response to an event.
+* **Job replica:** A typical job execution runs one replica defined by the job's configuration. In advanced scenarios, a job execution can run multiple replicas.
++ ## Job trigger types A job's trigger type determines how the job is started. The following trigger types are available:
A job's trigger type determines how the job is started. The following trigger ty
### Manual jobs
-Manual jobs are triggered on-demand using the Azure CLI or a request to the Azure Resource Manager API.
+Manual jobs are triggered on-demand using the Azure CLI, Azure portal, or a request to the Azure Resource Manager API.
Examples of manual jobs include:
To create a manual job using the Azure CLI, use the `az containerapp job create`
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Manual" \
- --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ```
Container Apps jobs use cron expressions to define schedules. It supports the st
| `0 0 * * 0` | Runs every Sunday at midnight. | | `0 0 1 * *` | Runs on the first day of every month at midnight. |
-Cron expressions in scheduled jobs are evaluated in Universal Time Coordinated (UTC).
+Cron expressions in scheduled jobs are evaluated in Coordinated Universal Time (UTC).
# [Azure CLI](#tab/azure-cli)
To create a scheduled job using the Azure CLI, use the `az containerapp job crea
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Schedule" \
- --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \ --cron-expression "*/1 * * * *"
Event-driven jobs are triggered by events from supported [custom scalers](scale-
Container apps and event-driven jobs use [KEDA](https://keda.sh/) scalers. They both evaluate scaling rules on a polling interval to measure the volume of events for an event source, but the way they use the results is different.
-In an app, each replica continuously processes events and a scaling rule determines the number of replicas to run to meet demand. In event-driven jobs, each job typically processes a single event, and a scaling rule determines the number of jobs to run.
+In an app, each replica continuously processes events and a scaling rule determines the number of replicas to run to meet demand. In event-driven jobs, each job execution typically processes a single event, and a scaling rule determines the number of job executions to run.
Use jobs when each event requires a new instance of the container with dedicated resources or needs to run for a long time. Event-driven jobs are conceptually similar to [KEDA scaling jobs](https://keda.sh/docs/latest/concepts/scaling-jobs/).
To create an event-driven job using the Azure CLI, use the `az containerapp job
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Event" \
- --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 \
--image "docker.io/myuser/my-event-driven-job:latest" \ --cpu "0.25" --memory "0.5Gi" \ --min-executions "0" \
To start a job execution using the Azure Resource Manager REST API, make a `POST
The following example starts an execution of a job named `my-job` in a resource group named `my-resource-group`: ```http
-POST https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/start?api-version=2022-11-01-preview
+POST https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/start?api-version=2023-05-01
Authorization: Bearer <TOKEN> ``` Replace `<SUBSCRIPTION_ID>` with your subscription ID.
-To authenticate the request, replace `<TOKEN>` in the `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure).
+To authenticate the request, replace `<TOKEN>` in the `Authorization` header with a valid bearer token. The identity used to generate the token must have `Contributor` permission to the Container Apps job resource. For more information, see [Azure REST API reference](/rest/api/azure).
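For example, a sketch that acquires a token with the Azure CLI and starts the execution with curl, assuming the signed-in identity has the required permission; the subscription ID is a placeholder:

```azurecli
TOKEN=$(az account get-access-token --query accessToken --output tsv)

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/start?api-version=2023-05-01"
```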
# [Azure portal](#tab/azure-portal)
To start a job execution in the Azure portal, select **Run now** in the job's ov
When you start a job execution, you can choose to override the job's configuration. For example, you can override an environment variable or the startup command to run the same job with different inputs. The overridden configuration is only used for the current execution and doesn't change the job's configuration.
+> [!IMPORTANT]
+> When overriding the configuration, the job's entire template configuration is replaced with the new configuration. Ensure that the new configuration includes all required settings.
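For reference, a minimal sketch of what a complete template might contain; the image, environment variable, and resource values are illustrative:

```yaml
containers:
- name: my-job
  image: mcr.microsoft.com/k8se/quickstart-jobs:latest
  env:
  - name: MY_NAME
    value: Azure
  resources:
    cpu: 0.25
    memory: 0.5Gi
```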
+ # [Azure CLI](#tab/azure-cli) To override the job's configuration while starting an execution, use the `az containerapp job start` command and pass a YAML file containing the template to use for the execution. The following example starts an execution of a job named `my-job` in a resource group named `my-resource-group`.
Retrieve the job's current configuration with the `az containerapp job show` com
az containerapp job show --name "my-job" --resource-group "my-resource-group" --query "properties.template" --output yaml > my-job-template.yaml ```
+The `--query "properties.template"` option returns only the job's template configuration.
+ Edit the `my-job-template.yaml` file to override the job's configuration. For example, to override the environment variables, modify the `env` section: ```yaml
az containerapp job start --name "my-job" --resource-group "my-resource-group" \
To override the job's configuration, include a template in the request body. The following example overrides the startup command to run a different command: ```http
-POST https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/start?api-version=2022-11-01-preview
+POST https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/start?api-version=2023-05-01
Content-Type: application/json Authorization: Bearer <TOKEN>
Authorization: Bearer <TOKEN>
} ```
-Replace `<SUBSCRIPTION_ID>` with your subscription ID and `<TOKEN>` in the `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure).
+Replace `<SUBSCRIPTION_ID>` with your subscription ID and `<TOKEN>` in the `Authorization` header with a valid bearer token. The identity used to generate the token must have `Contributor` permission to the Container Apps job resource. For more information, see [Azure REST API reference](/rest/api/azure).
# [Azure portal](#tab/azure-portal)
az containerapp job execution list --name "my-job" --resource-group "my-resource
To get the status of job executions using the Azure Resource Manager REST API, make a `GET` request to the job's `executions` operation. The following example returns the status of the most recent execution of a job named `my-job` in a resource group named `my-resource-group`: ```http
-GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/executions?api-version=2022-11-01-preview
+GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/executions?api-version=2023-05-01
``` Replace `<SUBSCRIPTION_ID>` with your subscription ID.
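As with starting an execution, you can call this endpoint from any HTTP client. A sketch using curl and a token issued by the Azure CLI:

```azurecli
TOKEN=$(az account get-access-token --query accessToken --output tsv)

curl -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.App/jobs/my-job/executions?api-version=2023-05-01"
```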
Container Apps jobs support advanced configuration options such as container set
### Container settings
-Container settings define the containers to run in each replica of a job execution. They include environment variables, secrets, and resource limits. For more information, see [Containers](containers.md).
+Container settings define the containers to run in each replica of a job execution. They include environment variables, secrets, and resource limits. For more information, see [Containers](containers.md). Running multiple containers in a single job is an advanced scenario. Most jobs run a single container.
### Job settings
The following table includes the job settings that you can configure:
| Setting | Azure Resource Manager property | CLI parameter| Description | ||||| | Job type | `triggerType` | `--trigger-type` | The type of job. (`Manual`, `Schedule`, or `Event`) |
-| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. For most jobs, set the value to `1`. |
-| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas to complete successfully for the execution to succeed. For most jobs, set the value to `1`. |
| Replica timeout | `replicaTimeout` | `--replica-timeout` | The maximum time in seconds to wait for a replica to complete. |
+| Polling interval | `pollingInterval` | `--polling-interval` | The time in seconds to wait between polling for events. Default is 30 seconds. |
| Replica retry limit | `replicaRetryLimit` | `--replica-retry-limit` | The maximum number of times to retry a failed replica. To fail a replica without retrying, set the value to `0`. |
+| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. For most jobs, set the value to `1`. |
+| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas that must complete successfully for the execution to succeed. Must be less than or equal to the parallelism. For most jobs, set the value to `1`. |
### Example
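A minimal sketch that combines these settings on a manual job; the names and values are illustrative:

```azurecli
az containerapp job create \
    --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \
    --trigger-type "Manual" \
    --replica-timeout 1800 \
    --replica-retry-limit 0 \
    --replica-completion-count 1 \
    --parallelism 1 \
    --image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \
    --cpu "0.25" --memory "0.5Gi"
```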
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The following quotas are on a per subscription basis for Azure Container Apps.
-To request an increase in quota amounts for your container app, learn [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-) and [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+To request a [quota increase](faq.yml#how-can-i-request-a-quota-increase-), you can [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
The *Is Configurable* column in the following tables denotes a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/). For more information, see [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-).
-| Feature | Scope | Default | Is Configurable | Remarks |
+| Feature | Scope | Default Quota | Is Configurable | Remarks |
|--|--|--|--|--|
-| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region. |
-| Environments | Global | Up to 20 | Yes | Limit up to 20 environments per subscription across all regions |
+| Environments | Region | Up to 15 | Yes | Up to 15 environments per subscription, per region. |
+| Environments | Global | Up to 20 | Yes | Up to 20 environments per subscription, across all regions. |
| Container Apps | Environment | Unlimited | n/a | |
-| Revisions | Container app | 100 | No | |
-| Replicas | Revision | 300 | Yes | |
+| Revisions | Container app | Up to 100 | No | |
+| Replicas | Revision | Unlimited | No | The maximum number of replicas you can configure is 300 in the Azure portal and 1,000 in the Azure CLI. There must also be enough core quota available. |
## Consumption plan
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Scaling is defined by the combination of limits, rules, and behavior.
| Scale limit | Default value | Min value | Max value | |||||
- | Minimum number of replicas per revision | 0 | 0 | 300 |
- | Maximum number of replicas per revision | 10 | 1 | 300 |
+ | Minimum number of replicas per revision | 0 | 0 | The maximum number of replicas you can configure is 300 in the Azure portal and 1,000 in the Azure CLI. |
+ | Maximum number of replicas per revision | 10 | 1 | The maximum number of replicas you can configure is 300 in the Azure portal and 1,000 in the Azure CLI. |
- To request an increase in maximum replica amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+ For more information, see [Quotas for Azure Container Apps](quotas.md).
- **Rules** are the criteria used by Container Apps to decide when to add or remove replicas.
container-apps Spring Cloud Config Server Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/spring-cloud-config-server-usage.md
You can use client side decryption of properties by following the steps:
## Next steps > [!div class="nextstepaction"]
-> [Set up a Spring Cloud Config Server](spring-cloud-config-server.md)
+> [Tutorial: Connect to a managed Spring Cloud Config Server](spring-cloud-config-server.md)
container-apps Spring Cloud Eureka Server Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/spring-cloud-eureka-server-usage.md
Now you have a caller and callee application that communicate with each other us
## Next steps > [!div class="nextstepaction"]
-> [Use Spring Cloud Eureka Server](spring-cloud-eureka-server.md)
+> [Tutorial: Connect to a managed Spring Cloud Eureka Server](spring-cloud-eureka-server.md)
container-apps Tutorial Event Driven Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-event-driven-jobs.md
In this tutorial, you learn how to work with [event-driven jobs](jobs.md#event-d
> * Deploy the job to the Container Apps environment > * Verify that the queue messages are processed by the container app
-The job you create starts an execution for each message that is sent to an Azure Storage Queue. Each job execution runs a container that performs the following steps:
+The job you create starts an execution for each message that is sent to an Azure Storage queue. Each job execution runs a container that performs the following steps:
-1. Dequeues one message from the queue.
+1. Gets one message from the queue.
1. Logs the message to the job execution logs. 1. Deletes the message from the queue. 1. Exits.
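For example, once the queue exists you can drop a test message onto it so the job has something to process. A sketch assuming a queue named `my-queue` and its connection string in `$QUEUE_CONNECTION_STRING` (both illustrative):

```azurecli
az storage message put \
    --queue-name "my-queue" \
    --content "Hello, job" \
    --connection-string "$QUEUE_CONNECTION_STRING"
```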
+> [!IMPORTANT]
+> The scaler monitors the queue's length to determine how many jobs to start. For accurate scaling, don't delete a message from the queue until the job execution has finished processing it.
+ The source code for the job you run in this tutorial is available in an Azure Samples [GitHub repository](https://github.com/Azure-Samples/container-apps-event-driven-jobs-tutorial/blob/main/index.js). [!INCLUDE [container-apps-create-cli-steps-jobs.md](../../includes/container-apps-create-cli-steps-jobs.md)]
To deploy the job, you must first build a container image for the job and push i
--environment "$ENVIRONMENT" \ --trigger-type "Event" \ --replica-timeout "1800" \
- --replica-retry-limit "1" \
- --replica-completion-count "1" \
- --parallelism "1" \
--min-executions "0" \ --max-executions "10" \ --polling-interval "60" \
To deploy the job, you must first build a container image for the job and push i
| Parameter | Description | | | | | `--replica-timeout` | The maximum duration a replica can execute. |
- | `--replica-retry-limit` | The number of times to retry a replica. |
- | `--replica-completion-count` | The number of replicas to complete successfully before a job execution is considered successful. |
- | `--parallelism` | The number of replicas to start per job execution. |
| `--min-executions` | The minimum number of job executions to run per polling interval. | | `--max-executions` | The maximum number of job executions to run per polling interval. | | `--polling-interval` | The polling interval at which to evaluate the scale rule. |
container-instances Container Instances Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-powershell.md
First, create a resource group named *myResourceGroup* in the *eastus* location
New-AzResourceGroup -Name myResourceGroup -Location EastUS ```
-## Create a container
+## Create a container group
-Now that you have a resource group, you can run a container in Azure. To create a container instance with Azure PowerShell, provide a resource group name, container instance name, and Docker container image to the [New-AzContainerGroup][New-AzContainerGroup] cmdlet. In this quickstart, you use the public `mcr.microsoft.com/windows/servercore/iis:nanoserver` image. This image packages Microsoft Internet Information Services (IIS) to run in Nano Server.
+Now that you have a resource group, you can run a container in Azure. To create a container instance with Azure PowerShell, you'll first need to create a `ContainerInstanceObject` by providing a name and image for the container. In this quickstart, you use the public `mcr.microsoft.com/windows/servercore/iis:nanoserver` image. This image packages Microsoft Internet Information Services (IIS) to run in Nano Server.
+
+```azurepowershell-interactive
+$container = New-AzContainerInstanceObject -Name myContainer -Image mcr.microsoft.com/windows/servercore/iis:nanoserver
+```
+
+Next, use the [New-AzContainerGroup][New-AzContainerGroup] cmdlet. You need to provide a name for the container group, your resource group's name, a location for the container group, the container instance you just created, the operating system type, and a unique IP address DNS name label.
You can expose your containers to the internet by specifying one or more ports to open, a DNS name label, or both. In this quickstart, you deploy a container with a DNS name label so that IIS is publicly reachable.
-Execute a command similar to the following to start a container instance. Set a `-DnsNameLabel` value that's unique within the Azure region where you create the instance. If you receive a "DNS name label not available" error message, try a different DNS name label.
+Execute a command similar to the following to start a container instance. Set a `-IPAddressDnsNameLabel` value that's unique within the Azure region where you create the instance. If you receive a "DNS name label not available" error message, try a different DNS name label.
```azurepowershell-interactive
-New-AzContainerGroup -ResourceGroupName myResourceGroup -Name mycontainer -Image mcr.microsoft.com/windows/servercore/iis:nanoserver -OsType Windows -DnsNameLabel aci-demo-win
+New-AzContainerGroup -ResourceGroupName myResourceGroup -Name myContainerGroup -Location EastUS -Container $container -OsType Windows -IPAddressDnsNameLabel aci-demo-win
``` Within a few seconds, you should receive a response from Azure. The container's `ProvisioningState` is initially **Creating**, but should move to **Succeeded** within a minute or two. Check the deployment state with the [Get-AzContainerGroup][Get-AzContainerGroup] cmdlet: ```azurepowershell-interactive
-Get-AzContainerGroup -ResourceGroupName myResourceGroup -Name mycontainer
+Get-AzContainerGroup -ResourceGroupName myResourceGroup -Name myContainerGroup
``` The container's provisioning state, fully qualified domain name (FQDN), and IP address appear in the cmdlet's output: ```console
-PS Azure:\> Get-AzContainerGroup -ResourceGroupName myResourceGroup -Name mycontainer
+PS Azure:\> Get-AzContainerGroup -ResourceGroupName myResourceGroup -Name myContainerGroup
ResourceGroupName : myResourceGroup
-Id : /subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/mycontainer
-Name : mycontainer
+Id : /subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/myContainerGroup
+Name : myContainerGroup
Type : Microsoft.ContainerInstance/containerGroups Location : eastus Tags : ProvisioningState : Creating
-Containers : {mycontainer}
+Containers : {myContainer}
ImageRegistryCredentials : RestartPolicy : Always IpAddress : 52.226.19.87
Once the container's `ProvisioningState` is **Succeeded**, navigate to its `Fqdn
When you're done with the container, remove it with the [Remove-AzContainerGroup][Remove-AzContainerGroup] cmdlet: ```azurepowershell-interactive
-Remove-AzContainerGroup -ResourceGroupName myResourceGroup -Name mycontainer
+Remove-AzContainerGroup -ResourceGroupName myResourceGroup -Name myContainerGroup
``` ## Next steps
copilot Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md
While Microsoft Copilot for Azure (preview) can perform many types of tasks, it'
Keep in mind these current limitations: -- The number of chats per day that a user can have, and the number of requests per chat, are limited. When you open Microsoft Copilot for Azure (preview), you'll see details about these limitations.
+- Any action taken on more than 10 resources must be performed outside of Microsoft Copilot for Azure.
+
+- You can make only 15 requests during any given chat, and you can have only 10 chats in a 24-hour period.
+ - Some responses that display lists will be limited to the top five items. - For some tasks and queries, using a resource's name will not work, and the Azure resource ID must be provided. - Microsoft Copilot for Azure (preview) is currently available in English only.
cosmos-db Cmk Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md
You see this error when the Azure Key Vault or specified Key are not found.
Check whether the Azure Key Vault or the specified key exists, restore it if it was accidentally deleted, and then wait for one hour. If the issue isn't resolved after more than 2 hours, contact customer service.
+## Azure Key Vault key disabled or expired
+
+### Reason for error
+
+You see this error when the Azure Key Vault key is disabled or expired.
+
+### Troubleshooting
+
+If your key is disabled, enable it. If the key is expired, update or remove its expiration date. Once the account is no longer revoked, you can rotate the key; Azure Cosmos DB updates the key version once the account is back online.
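For example, a sketch of re-enabling a disabled key and extending its expiration with the Azure CLI; the vault and key names are placeholders:

```azurecli
az keyvault key set-attributes \
    --vault-name "<my-key-vault>" \
    --name "<my-key>" \
    --enabled true \
    --expires "2030-01-01T00:00:00Z"
```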
+ ## Invalid Azure Cosmos DB default identity ### Reason for error
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
Last updated 07/08/2022
Azure Cosmos DB free tier makes it easy to get started, develop, test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account for free. The throughput and storage consumed beyond these limits are billed at regular price. Free tier is available for all API accounts with provisioned throughput, autoscale throughput, single, or multiple write regions.
-Free tier lasts indefinitely for the lifetime of the account and it comes with all the [benefits and features](introduction.md#an-ai-database-with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account. These benefits include unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
+Free tier lasts indefinitely for the lifetime of the account and it comes with all the [benefits and features](introduction.md#with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account. These benefits include unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
You can have up to one free tier Azure Cosmos DB account per Azure subscription and you must opt in when creating the account. If you don't see the option to apply the free tier discount, another account in the subscription has already been enabled with free tier. If you create an account with free tier and then delete it, you can apply free tier for a new account. When creating a new account, it's recommended to enable the free tier discount if it's available.
cosmos-db How To Restore In Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-restore-in-account-continuous-backup.md
Previously updated : 05/08/2023 Last updated : 03/21/2024 zone_pivot_groups: azure-cosmos-db-apis-nosql-mongodb-gremlin-table
Use the Azure CLI to restore a deleted container or database. Child containers a
    --resource-group <resource-group-name> \
    --account-name <account-name> \
    --name <database-name> \
-    --restore-timestamp <timestamp>
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` 1. Initiate a restore operation for a deleted container by using [az cosmosdb sql container restore](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-restore):
Use the Azure CLI to restore a deleted container or database. Child containers a
    --resource-group <resource-group-name> \
    --account-name <account-name> \
    --database-name <database-name> \
-    --name <container-name> \
-    --restore-timestamp <timestamp>
+    --name <container-name> \
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` :::zone-end
Use the Azure CLI to restore a deleted container or database. Child containers a
    --account-name <account-name> \
    --name <database-name> \
    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` 1. Initiate a restore operation for a deleted collection by using [az cosmosdb mongodb collection restore](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-restore):
Use the Azure CLI to restore a deleted container or database. Child containers a
    --account-name <account-name> \
    --database-name <database-name> \
    --name <container-name> \
-    --restore-timestamp <timestamp>
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` :::zone-end
Use the Azure CLI to restore a deleted container or database. Child containers a
    --resource-group <resource-group-name> \
    --account-name <account-name> \
    --name <database-name> \
-    --restore-timestamp <timestamp>
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` 1. Initiate a restore operation for a deleted graph by using [az cosmosdb gremlin graph restore](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-restore):
Use the Azure CLI to restore a deleted container or database. Child containers a
    --account-name <account-name> \
    --database-name <database-name> \
    --name <graph-name> \
-    --restore-timestamp <timestamp>
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` :::zone-end
Use the Azure CLI to restore a deleted container or database. Child containers a
    --resource-group <resource-group-name> \
    --account-name <account-name> \
    --table-name <table-name> \
-    --restore-timestamp <timestamp>
+    --restore-timestamp <timestamp> \
+    --disable-ttl True
``` :::zone-end
Use Azure PowerShell to restore a deleted container or database. Child container
DatabaseName = "<database-name>" Name = "<container-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl= $true
} Restore-AzCosmosDBSqlContainer @parameters ```
Use Azure PowerShell to restore a deleted container or database. Child container
AccountName = "<account-name>" Name = "<database-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl=$true
} Restore-AzCosmosDBMongoDBDatabase @parameters ```
Use Azure PowerShell to restore a deleted container or database. Child container
DatabaseName = "<database-name>" Name = "<collection-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl=$true
} Restore-AzCosmosDBMongoDBCollection @parametersΓÇ» ```
Use Azure PowerShell to restore a deleted container or database. Child container
AccountName = "<account-name>" Name = "<database-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl=$true
} Restore-AzCosmosDBGremlinDatabase @parameters ```
Use Azure PowerShell to restore a deleted container or database. Child container
DatabaseName = "<database-name>" Name = "<graph-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl=$true
} Restore-AzCosmosDBGremlinGraph @parameters ```
Use Azure PowerShell to restore a deleted container or database. Child container
AccountName = "<account-name>" Name = "<table-name>" RestoreTimestampInUtc = "<timestamp>"
+ DisableTtl=$true
} Restore-AzCosmosDBTable @parameters ```
You can restore deleted containers and databases by using an Azure Resource Mana
"name": "<name-of-database-or-container>", "restoreParameters": { "restoreSource": "<source-account-instance-id>",
- "restoreTimestampInUtc": "<timestamp>"
+ "restoreTimestampInUtc": "<timestamp>",
+ "restoreWithTtlDisabled": "true"
}, "createMode": "Restore" }
You can restore deleted containers and databases by using an Azure Resource Mana
"name": "<name-of-database-or-collection>", "restoreParameters": { "restoreSource": "<source-account-instance-id>",
- "restoreTimestampInUtc": "<timestamp>"
+ "restoreTimestampInUtc": "<timestamp>",
+ "restoreWithTtlDisabled": "true"
}, "createMode": "Restore" }
You can restore deleted containers and databases by using an Azure Resource Mana
"name": "<name-of-database-or-graph>", "restoreParameters": { "restoreSource": "<source-account-instance-id>",
- "restoreTimestampInUtc": "<timestamp>"
+ "restoreTimestampInUtc": "<timestamp>",
+ "restoreWithTtlDisabled": "true"
}, "createMode": "Restore" }
You can restore deleted containers and databases by using an Azure Resource Mana
"name": "<name-of-table>", "restoreParameters": { "restoreSource": "<source-account-instance-id>",
- "restoreTimestampInUtc": "<timestamp>"
+ "restoreTimestampInUtc": "<timestamp>",
+ "restoreWithTtlDisabled": "true"
}, "createMode": "Restore" }
cosmos-db How To Setup Customer Managed Keys Existing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md
ms.devlang: azurecli
-# Configure customer-managed keys for your existing Azure Cosmos DB account with Azure Key Vault (Preview)
+# Configure customer-managed keys for your existing Azure Cosmos DB account with Azure Key Vault
[!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
Enabling a second layer of encryption for data at rest using [Customer Managed K
This feature eliminates the need for data migration to a new account to enable CMK. It helps to improve customers' security and compliance posture.
-> [!NOTE]
-> Currently, enabling customer-managed keys on existing Azure Cosmos DB accounts is in preview. This preview is provided without a service-level agreement. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Enabling CMK kicks off a background, asynchronous process to encrypt all the existing data in the account, while new incoming data are encrypted before persisting. There's no need to wait for the asynchronous operation to succeed. The enablement process consumes unused/spare RUs so that it doesn't affect your read/write workloads. You can refer to this [link](./how-to-setup-customer-managed-keys.md?tabs=azure-powershell#how-do-customer-managed-keys-influence-capacity-planning) for capacity planning once your account is encrypted. ## Get started by enabling CMK on your existing accounts
+> [!IMPORTANT]
+> Go through the prerequisites section thoroughly. These are important considerations.
+ ### Prerequisites All the prerequisite steps needed while configuring Customer Managed Keys for new accounts is applicable to enable CMK on your existing account. Refer to the steps [here](./how-to-setup-customer-managed-keys.md?tabs=azure-portal#prerequisites)
+Enabling encryption on your Azure Cosmos DB account adds a small overhead to each document ID, which limits the maximum document ID size to 990 bytes instead of 1,024 bytes. If your account has any documents with IDs larger than 990 bytes, the encryption process fails until those documents are deleted.
+
+To verify whether your account is compliant, you can use the provided console application [hosted here](https://github.com/AzureCosmosDB/Cosmos-DB-Non-CMK-to-CMK-Migration-Scanner) to scan your account. Make sure to use the endpoint from your account's `sqlEndpoint` property, regardless of the API selected.
+
+If you want to disable server-side validation for this check during migration, contact support.
+ ### Steps to enable CMK on your existing account To enable CMK on an existing account, update the account with an ARM template setting a Key Vault key identifier in the keyVaultKeyUri property ΓÇô just like you would when enabling CMK on a new account. This step can be done by issuing a PATCH call with the following payload:
The state of the key is checked when CMK encryption is triggered. If the key in
**Can we enable CMK encryption on our existing production account?**
-Yes. Since the capability is currently in preview, we recommend testing all scenarios first on nonproduction accounts and once you're comfortable you can consider production accounts.
Yes. Go through the prerequisites section thoroughly. We recommend testing all scenarios on nonproduction accounts first, and once you're comfortable, you can consider production accounts.
## Next steps
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account. > [!NOTE]
-> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation. Enabling customer-managed keys on your existing accounts is available for preview. You can refer to the link [here](how-to-setup-customer-managed-keys-existing-accounts.md) for more details
+> If you wish to enable customer-managed keys on your existing Azure Cosmos DB accounts, refer to the link [here](how-to-setup-customer-managed-keys-existing-accounts.md) for more details.
> [!WARNING] > The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
The surge of AI-powered applications created another layer of complexity, becaus
Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from caching to backup to vector search. It provides the data infrastructure for modern applications like AI, digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
-## An AI database providing industry-leading capabilities... for free
+## An AI database providing industry-leading capabilities...
+
+## ...for free
Azure Cosmos DB is a fully managed NoSQL, relational, and vector database. It offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
App development is faster and more productive thanks to:
- Open source APIs - SDKs for popular languages - AI database functionalities like integrated vector database or seamless integration with Azure AI Services to support Retrieval Augmented Generation-- Query Copilot for generating NoSQL queries based on your natural language prompts [(preview)](nosql/query/how-to-enable-use-copilot.md)
+- Query Copilot for generating NoSQL queries based on your natural language prompts ([preview](nosql/query/how-to-enable-use-copilot.md))
As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
-If you're an existing Azure AI or GitHub Copilot customer, you may try Azure Cosmos DB for free with 40,000 [RU/s](request-units.md) of throughput for 90 days under the Azure AI Advantage offer.
+If you're an existing Azure AI or GitHub Copilot customer, you may try Azure Cosmos DB for free with 40,000 [RU/s](request-units.md) (equivalent of up to $6,000) of throughput for 90 days under the [Azure AI Advantage offer](ai-advantage.md).
-> [!div class="nextstepaction"]
-> [90-day Free Trial with Azure AI Advantage](ai-advantage.md)
+Alternatively, you may use the [Azure Cosmos DB lifetime free tier](free-tier.md) with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
-If you aren't an Azure customer, you may use the [30-day Free Trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/). No commitment follows the end of your trial period.
+If you aren't already using Azure, you may Try Azure Cosmos DB free for 30 days without an Azure subscription ([learn more](https://azure.microsoft.com/try/cosmosdb/)). No commitment follows the end of your trial period.
-Alternatively, you may use the [Azure Cosmos DB lifetime free tier](free-tier.md) with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
+> [!div class="nextstepaction"]
+> [Try Azure Cosmos DB free](https://azure.microsoft.com/try/cosmosdb/)
> [!TIP] > To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv).
-## An AI database for more than just AI apps
+## ...for more than just AI apps
Besides AI, Azure Cosmos DB should also be your goto database for web, mobile, gaming, and IoT applications. Azure Cosmos DB is well positioned for solutions that handle massive amounts of data, reads, and writes at a global scale with near-real response times. Azure Cosmos DB's guaranteed high availability, high throughput, low latency, and tunable consistency are huge advantages when building these types of applications. Learn about how Azure Cosmos DB can be used to build IoT and telematics, retail and marketing, gaming and web and mobile applications.
-## An AI database with unmatched reliability and flexibility
+## ...with unmatched reliability and flexibility
### Guaranteed speed at any scale
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/free-tier.md
Azure Cosmos DB for MongoDB vCore now introduces a new SKU, the "Free Tier," ena
boasting command and feature parity with a regular Azure Cosmos DB for MongoDB vCore account. It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect
-for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the West Europe, Southeast Asia, East US and East US 2 regions.
+for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available in the Southeast Asia region.
## Get started
specify your storage requirements, and you're all set. Rest assured, your data,
## Restrictions * For a given subscription, only one free tier account is permissible.
-* Free tier is currently available in West Europe, Southeast Asia, East US and East US 2 regions only.
+* Free tier is currently available in the Southeast Asia region only.
* High availability, Azure Active Directory (Azure AD) and Diagnostic Logging are not supported.
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
This article describes the different options to use Azure Cosmos DB for developm
Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free.
-Free tier lasts indefinitely for the lifetime of the account and comes with all the [benefits and features](introduction.md#an-ai-database-with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more. You can create a free tier account using Azure portal, CLI, PowerShell, and a Resource Manager template. To learn more, see how to [create a free tier account](free-tier.md) article and the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+Free tier lasts indefinitely for the lifetime of the account and comes with all the [benefits and features](introduction.md#with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more. You can create a free tier account using Azure portal, CLI, PowerShell, and a Resource Manager template. To learn more, see how to [create a free tier account](free-tier.md) article and the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
## Azure free account
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
description: Learn how to identify the restore time and restore a live or delete
Previously updated : 03/31/2023 Last updated : 03/21/2024
Before restoring the account, install the [latest version of Azure PowerShell](/
### <a id="trigger-restore-ps"></a>Trigger a restore operation for API for NoSQL account
-The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, PublicNetworkAccess and timestamp:
+The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, PublicNetworkAccess, DisableTtl, and timestamp:
Restore-AzCosmosDBAccount `
-SourceDatabaseAccountName "SourceDatabaseAccountName" ` -RestoreTimestampInUtc "UTCTime" ` -Location "AzureRegionName" `
- -PublicNetworkAccess Disabled
+ -PublicNetworkAccess Disabled `
+ -DisableTtl $true
```
Restore-AzCosmosDBAccount `
-RestoreTimestampInUtc "2021-01-05T22:06:00" ` -Location "West US" ` -PublicNetworkAccess Disabled
+ -DisableTtl $false
+ ```
-If `PublicNetworkAccess` is not set, restored account is accessible from public network, please ensure to pass `Disabled` to the `PublicNetworkAccess` option to disable public network access for restored account.
+If `PublicNetworkAccess` isn't set, the restored account is accessible from the public network. Make sure to pass `Disabled` to the `PublicNetworkAccess` option to disable public network access for the restored account. Setting `DisableTtl` to `$true` ensures TTL is disabled on the restored account; omitting the parameter restores the account with TTL enabled if it was set earlier.
> [!NOTE] > For restoring with public network access disabled, the minimum stable version of Az.CosmosDB required is 1.12.0.
az cosmosdb restore \
--restore-timestamp 2020-07-13T16:03:41+0000 \ --resource-group <MyResourceGroup> \ --location "West US" \
- --public-network-access Disabled
+ --public-network-access Disabled \
+ --disable-ttl True
```
-If `--public-network-access` is not set, restored account is accessible from public network. Please ensure to pass `Disabled` to the `--public-network-access` option to prevent public network access for restored account.
+If `--public-network-access` is not set, the restored account is accessible from the public network. Please ensure to pass `Disabled` to the `--public-network-access` option to prevent public network access for the restored account. Setting `--disable-ttl` to `True` ensures TTL is disabled on the restored account, and not providing this parameter restores the account with TTL enabled if it was set earlier.
> [!NOTE] > For restoring with public network access disabled, the minimum stable version of azure-cli is 2.52.0.
This command output now shows when a database was created and deleted.
[ { "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/abcd1234-d1c0-4645-a699-abcd1234/restorableSqlDatabases/40e93dbd-2abe-4356-a31a-35567b777220",
- ..
- "name": "40e93dbd-2abe-4356-a31a-35567b777220",
+ "name": "40e93dbd-2abe-4356-a31a-35567b777220",
"resource": { "database": { "id": "db1"
This command output now shows when a database was created and deleted.
"ownerId": "db1", "ownerResourceId": "YuZAAA==" },
- ..
+
}, { "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/abcd1234-d1c0-4645-a699-abcd1234/restorableSqlDatabases/243c38cb-5c41-4931-8cfb-5948881a40ea",
- ..
"name": "243c38cb-5c41-4931-8cfb-5948881a40ea", "resource": { "database": {
This command output now shows when a database was created and deleted.
"ownerId": "spdb1", "ownerResourceId": "OIQ1AA==" },
- ..
+
} ] ```
This command output shows includes list of operations performed on all the conta
```json [ {
- ...
-
"eventTimestamp": "2021-01-08T23:25:29Z", "operationType": "Replace", "ownerId": "procol3", "ownerResourceId": "OIQ1APZ7U18="
-...
}, {
- ...
"eventTimestamp": "2021-01-08T23:25:26Z", "operationType": "Create", "ownerId": "procol3",
az cosmosdb gremlin restorable-resource list \
--restore-location "West US" \ --restore-timestamp "2021-01-10T01:00:00+0000" ```
+This command output shows the graphs which are restorable:
+ ```
-[ {
-```
+[
+ {
"databaseName": "db1",
-"graphNames": [
- "graph1",
- "graph3",
- "graph2"
-]
-```
+"graphNames": [ "graph1", "graph3", "graph2" ]
} ] ```
az cosmosdb table restorable-table list \
--instance-id "abcd1234-d1c0-4645-a699-abcd1234" --location "West US" ```+ ``` [ {
-```
+ "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/59781d91-682b-4cc2-93a3-c25d03fab159", "name": "59781d91-682b-4cc2-93a3-c25d03fab159", "resource": {
az cosmosdb table restorable-table list \
"ownerId": "table1", "ownerResourceId": "tOdDAKYiBhQ=", "rid": "9pvDGwAAAA=="
-},
-"type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
-```
},
-```
+"type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
+ },
+ {"id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785", "name": "2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785", "resource": {
az cosmosdb table restorable-table list \
"rid": "01DtkgAAAA==" }, "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
-```
+ }, ] ```
az cosmosdb table restorable-resource list \
--restore-location "West US" \ --restore-timestamp "2020-07-20T16:09:53+0000" ```+
+The following is the result of the command.
+ ``` { "tableNames": [
-```
"table1", "table3", "table2"
-```
+ ] } ```
Use the following ARM template to restore an account for the Azure Cosmos DB API
"restoreParameters": { "restoreSource": "/subscriptions/2296c272-5d55-40d9-bc05-4d56dc2d7588/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/6a18ecb8-88c2-4005-8dce-07b44b9741df", "restoreMode": "PointInTime",
- "restoreTimestampInUtc": "6/24/2020 4:01:48 AM"
+ "restoreTimestampInUtc": "6/24/2020 4:01:48 AM",
+ "restoreWithTtlDisabled": "true"
} } }
cosmos-db Restore In Account Continuous Backup Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-in-account-continuous-backup-resource-model.md
Title: Resource model for same account restore (preview)
+ Title: Resource model for same account restore
description: Review the required parameters and resource model for the same account(in-account) point-in-time restore feature of Azure Cosmos DB.
Previously updated : 05/08/2023 Last updated : 03/21/2024
-# Resource model for restore in same account for Azure Cosmos DB (preview)
+# Resource model for restore in same account for Azure Cosmos DB
[!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)]
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Last updated 03/30/2024
Vector databases are used in numerous domains and situations across analytical and generative AI, including natural language processing, video and image recognition, recommendation system, search, etc.
-In 2023, a notable trend in software was the integration of AI enhancements, often achieved by incorporating specialized standalone vector databases into existing tech stacks. This article explains what vector databases are, as well as presents an alternative architecture that you might want to consider: using an integrated vector database in the NoSQL or relational database you already use, especially when working with multi-modal data. This approach not only allows you to reduce cost but also achieve greater data consistency, scale, and performance.
+In 2023, a notable trend in software was the integration of AI enhancements, often achieved by incorporating specialized standalone vector databases into existing tech stacks. This article explains what vector databases are, as well as presents an alternative architecture that you might want to consider: using an integrated vector database in the NoSQL or relational database you already use, especially when working with multi-modal data. This approach not only allows you to reduce cost but also achieve greater data consistency, scalability, and performance.
> [!TIP]
-> Data consistency, scale, and performance guarantees are why OpenAI built its ChatGPT service on top of Azure Cosmos DB. You, too, can take advantage of its integrated vector database, as well as its single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Please consult the [implementation samples](#how-to-implement-integrated-vector-database-functionalities) section of this article and [try](#next-step) the lifetime free tier or one of the free trial options.
+> Data consistency, scalability, and performance are critical for data-intensive applications, which is why OpenAI chose to build the ChatGPT service on top of Azure Cosmos DB. You, too, can take advantage of its integrated vector database, as well as its single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. See [implementation samples](#how-to-implement-integrated-vector-database-functionalities) and [try](#next-step) it for free.
## What is a vector database?
The natively integrated vector database in our NoSQL API will become available i
[30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
-[90-day Free Trial with Azure AI Advantage](ai-advantage.md)
+[90-day Free Trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)
> [!div class="nextstepaction"] > [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB for MongoDB lifetime free vCore cluster](mongodb/vcore/free-tier.md)
+ ## More Vector Databases - [Azure PostgreSQL Server pgvector Extension](../postgresql/flexible-server/how-to-use-pgvector.md)
cost-management-billing Onboard Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/onboard-microsoft-customer-agreement.md
Previously updated : 12/15/2023 Last updated : 04/03/2024 -+ # Onboard to the Microsoft Customer Agreement (MCA)
-This playbook (guide) helps customers who buy Microsoft software and services through a Microsoft account manager set up an MCA. The guide was created to recommend best practices to onboard you to an MCA.
+This playbook (guide) helps customers who buy Microsoft software and services through a Microsoft account manager to set up an MCA. The guide was created to recommend best practices to onboard you to an MCA.
The onboarding processes and important considerations vary, depending on whether you are: -- New to MCA and have never signed an MCA contract but may have bought Azure and per-seat products using another method, such as licensing vehicle or contracting type.
+- New to MCA and didn't already sign an MCA contract but might have bought Azure and per device or user products using another method, such as a licensing vehicle or contracting type.
-Or-
This guide follows each path and provides information for each step of the proce
- **[Enterprise Agreement (EA)](https://www.microsoft.com/en-us/licensing/licensing-programs/enterprise)** - A licensing agreement designed for large organizations with 500 or more users or devices. It's a volume licensing program that gives organizations the flexibility to buy Azure or seat-based cloud services and software licenses under one agreement. - **Microsoft Customer Agreement (MCA)** - A Microsoft licensing agreement designed for automated processing, dynamic updating of terms, transparent pricing, and enhanced billing management capabilities.-- **Pay-as-you-go (PAYG)** ΓÇô A utility computing billing method that's used in cloud computing and geared towards organizations and end users. PAYG is a pricing option where you pay for the resources you use on an hourly or monthly basis. You only pay for what you use and can scale up or down as needed.
+- **Pay-as-you-go (PAYG)** ΓÇô A utility computing billing method used in cloud computing and geared towards organizations and end users. Pay-as-you-go is a pricing option where you pay for the resources you use on an hourly or monthly basis. You only pay for what you use and can scale up or down as needed.
- **APIs** - A software intermediary that allows two applications to interact with each other. For example, it defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, and the conventions to follow. - **Power BI** - A suite of Microsoft data visualization tools used to deliver insights throughout organizations.
The [MCA](https://www.microsoft.com/Licensing/how-to-buy/microsoft-customer-agre
The MCA has several benefits that can improve your invoice process, billing operations, and overall cost management including:
-Simplified purchasing with **fast and fully automated** access to Azure and per-seat licenses
+Simplified purchasing with **fast and fully automated** access to Azure and per device or user licenses
- A single, short agreement that doesn't expire and can be digitally signed - Allows you to complete a purchase and start using Azure right away - No upfront costs required with pay-as-you-go billing for most services - Buy only what you need when you need it and negotiate commitments when desired-- Per-seat subscriptions allow you to easily manage and track your organization's software usage
+- Easily manage and track your organization's software usage with per device or per user subscriptions
Improved billing experience with **intuitive invoices** - Intuitive invoice layout displays charges in an easy-to-read format, making expenditures easier to understand
Management, deployment, and optimization tools in a **single portal**
- Manage all your Azure purchases through a single, unified portal at Azure.com - Centrally control user authorizations in a single place with a single set of roles - Integrated cost management capabilities provide enterprise-grade insights into usage with recommendations on how to save money-- Easily manage your per-seat subscriptions for Microsoft licenses through the same portal, streamlining your software management process.
+- Easily manage your per device or user subscriptions for Microsoft licenses through the same portal, streamlining your software management process.
## New MCA Customer This section describes the steps you must take to enable and sign an MCA, which allows you to experience its benefits. >[!NOTE]
-> The following steps apply only to **new MCA customers** that have never signed an MCA or EA but who may have bought Azure or per seat products through another method, such as a licensing vehicle or contracting type. If you're a **customer migrating to MCA from an existing Microsoft EA**, see [Migrate from an EA to transition to an MCA](#migrate-from-an-ea-to-an-mca).
+> The following steps apply only to **new MCA customers** that have never signed an MCA or EA but who might have bought Azure or per device or user products through another method, such as a licensing vehicle or contracting type. If you're a **customer migrating to MCA from an existing Microsoft EA**, see [Migrate from an EA to transition to an MCA](#migrate-from-an-ea-to-an-mca).
Start your journey to MCA by using the steps in the following diagram. More details and supporting links are in the sections that follow the diagram.
You can accelerate proposal creation and contract signature by gathering the fol
- **Company's VAT or Tax ID** - **The primary contact's name, phone number, and email address**
-**The name and email address of the Billing Account Owner** who is the person in your organization that has authorization. They make the initial purchases and sign the MCA. They may or may not be the same person as the signer mentioned previously, depending on your organization's requirements.
+**The name and email address of the Billing Account Owner** who is the person in your organization that has authorization. They make the initial purchases and sign the MCA. They might or might not be the same person as the signer mentioned previously, depending on your organization's requirements.
If your organization has specific requirements for signing contracts such as who can sign, purchasing limits or how many people need to sign, advise your Microsoft account manager in advance.
To become operational includes steps to manage billing accounts, fully understan
Each billing account has at least one billing profile. Your first billing profile is set up when you sign up to use Azure. Users assigned to roles for a billing profile can view cost, set budgets, and can manage and pay invoices. Get an overview of how to [set up and manage your billing account](https://www.youtube.com/watch?v=gyvHl5VNWg4&ab_channel=MicrosoftAzure) and learn about the powerful [billing capabilities](../understand/mca-overview.md).
+For more information, see the following how-to videos:
+
+- [How to organize your Microsoft Customer Agreement Billing Account in the Azure portal](https://www.youtube.com/watch?v=6lmaovgWiZw&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=7)
+- [How to find a copy of your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=SQbKGo8JV74&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=4)
+
+If you're looking for Microsoft 365 admin center video resources, see [Microsoft Customer Agreement Video Tutorials](https://www.microsoft.com/licensing/learn-more/microsoft-customer-agreement/video-tutorials).
+ ### Step 6 ΓÇô Understand your MCA invoice In the billing account for an MCA, an invoice is generated every month for each billing profile. The invoice includes all charges from the previous month organized by invoice sections that you can define. You can view your invoices in the Azure portal and compare the charges to the usage detail files. Learn how the [charges on your invoice](https://www.youtube.com/watch?v=e2LGZZ7GubA&feature) work and take a step-by-step [invoice tutorial](../understand/review-customer-agreement-bill.md).
+For more information, see the [How to find and read your Microsoft Customer Agreement invoices in the Azure portal](https://www.youtube.com/watch?v=xkUkIunP4l8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=5) video.
+ ### Step 7 ΓÇô Get to know MCA features Learn more about features that you can use to optimize your experience and accelerate the value of MCA for your organization.
The following sections help you establish governance for your MCA.
We recommend using billing account roles to manage your billing account on the MCA. These roles are in addition to the built-in Azure roles used to manage resource assignments. Billing account roles are used to manage your billing account, profiles, and invoice sections. Learn how to manage who has [access to your billing account](https://www.youtube.com/watch?v=9sqglBlKkho&ab_channel=AzureCostManagement) and get an overview of [how billing account roles work](../manage/understand-mca-roles.md) in Azure.
+For more information, see the [How to manage access to your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=jh7PUKeAb0M&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=6) video.
+ ### Step 9 ΓÇô Organize your costs and customize billing The MCA provides you with flexibility to organize your costs based on your needs, whether it's by department, project, or development environment. Understand how to [organize your costs](https://www.youtube.com/watch?v=7RxTfShGHwU) and to [customize your billing](../manage/mca-section-invoice.md) to meet your needs.
+For more information, see the [How to optimize your workloads and reduce costs under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=UxO2cFyWn0w&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=3) video.
+ ### Step 10 ΓÇô Evaluate your needs for more tenants
-The MCA allows you to create multi-tenant billing relationships. They let you securely share your billing account with other tenants, while maintaining control over your billing data. If your organization needs multiple tenants, see [Manage billing across multiple tenants](../manage/manage-billing-across-tenants.md).
+The MCA allows you to create multitenant billing relationships. They let you securely share your billing account with other tenants, while maintaining control over your billing data. If your organization needs multiple tenants, see [Manage billing across multiple tenants](../manage/manage-billing-across-tenants.md).
## Manage your new MCA
An Azure subscription is a logical container used to create resources in Azure.
To create a subscription, see Create a [Microsoft Customer Agreement subscription](../manage/create-subscription.md).
+For more information about creating a subscription, see the [How to create an Azure Subscription under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=u5wf8KMD_M8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=8) video.
+
+If you're looking for Microsoft 365 admin center video resources, see [Microsoft Customer Agreement Video Tutorials](https://www.microsoft.com/licensing/learn-more/microsoft-customer-agreement/video-tutorials).
+ ## Migrate from an EA to an MCA This section of the onboarding guide describes the steps you follow to migrate from an EA to an MCA. Although the steps in this section are like those in the previous [New MCA customer](#new-mca-customer) section, there are important differences called out throughout this section.
This section of the onboarding guide describes the steps you follow to migrate f
The following points help you plan for your migration from EA to MCA: - Migrating from EA to MCA redirects your charges from your EA enrollment to your MCA billing account after you complete the subscription migration. The change goes into effect immediately. Any charges incurred up to the point of migration are invoiced to the EA and must be settled on that enrollment. There's no effect on your services and no downtime.-- You can continue to see your historic charges in the Azure portal under your EA enrollment billing scope.-- Depending on the timing of your migration, you may receive two invoices, one EA and one MCA, in the transition month. The MCA invoice covers usage for a calendar month and is generated from the fifth to the seventh day of the month following the usage.
+- You can continue to see your historic charges in the Azure portal under your EA enrollment billing scope. Historical charges aren't visible in cost analysis when migration completes if you're an Account owner or a subscription owner without access to view the EA billing scope. We recommend that you [download your cost and usage data and invoices](../understand/download-azure-daily-usage.md) before you transfer subscriptions.
+- Depending on the timing of your migration, you might receive two invoices, one EA and one MCA, in the transition month. The MCA invoice covers usage for a calendar month and is generated from the fifth to the seventh day of the month following the usage.
- To ensure your MCA invoice gets received by the right person or group, you must add an accounts payable email address as an invoice recipient's contact to the MCA. For more information, see [share your billing profiles invoice](../understand/download-azure-invoice.md#share-your-billing-profiles-invoice). - If you use Cost Management APIs for reporting purposes, familiarize yourself with [Other actions to manage your MCA](#other-actions-to-manage-your-mca). - Be sure to alert your accounts payable team of the important change to your invoice. You get a final EA invoice and start receiving a new monthly MCA invoice.
You can accelerate proposal creation and contract signature by gathering the fol
- **Company's VAT or Tax ID.** - **The primary contact's name, phone number and email address.**
-**The name and email address of the Billing Account Owner** who is the person in your organization that has authorization and signs the MCA and who makes the initial purchases. They may or may not be the same person as the signer mentioned previously, depending on your organization's requirements.
+**The name and email address of the Billing Account Owner** who is the person in your organization that has authorization and signs the MCA and who makes the initial purchases. They might or might not be the same person as the signer mentioned previously, depending on your organization's requirements.
If your organization has specific requirements for signing contracts such as who can sign, purchasing limits or how many people need to sign, advise your Microsoft account manager in advance.
Becoming operational includes steps to manage billing accounts, fully understand
Each billing account has at least one billing profile. Your first billing profile is set up when you sign up to use Azure. Users assigned to roles for a billing profile can view cost, set budgets, and manage and pay invoices. Get an overview of how to [set up and manage your billing account](https://www.youtube.com/watch?v=gyvHl5VNWg4&ab_channel=MicrosoftAzure) and learn about the powerful [billing capabilities](../understand/mca-overview.md).
+For more information, see the following how-to videos:
+
+- [How to organize your Microsoft Customer Agreement Billing Account in the Azure portal](https://www.youtube.com/watch?v=6lmaovgWiZw&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=7)
+- [How to find a copy of your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=SQbKGo8JV74&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=4)
+
+If you're looking for Microsoft 365 admin center video resources, see [Microsoft Customer Agreement Video Tutorials](https://www.microsoft.com/licensing/learn-more/microsoft-customer-agreement/video-tutorials).
+ ### Step 6 - Understand your MCA invoice In the billing account for an MCA, an invoice is generated every month for each billing profile. The invoice includes all charges from the previous month organized by invoice sections that you can define. You can view your invoices in the Azure portal and compare the charges to the usage detail files. Learn how the [charges on your invoice](https://www.youtube.com/watch?v=e2LGZZ7GubA&feature) work and take a step-by-step [invoice tutorial](../understand/review-customer-agreement-bill.md).
In the billing account for an MCA, an invoice is generated every month for each
>[!IMPORTANT] > Bank remittance details for your new MCA will differ from those for your old EA. Use the remittance information at the bottom of your MCA invoice. For more information, see [Bank details used to send wire transfers](../understand/pay-bill.md#bank-details-used-to-send-wire-transfer-payments).
+For more information, see the [How to find and read your Microsoft Customer Agreement invoices in the Azure portal](https://www.youtube.com/watch?v=xkUkIunP4l8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=5) video.
+ ### Step 7 ΓÇô Get to know MCA features Learn more about features that you can use to optimize your experience and accelerate the value of MCA for your organization.
Use the following steps to establish governance for your MCA.
We recommend using billing account roles to manage your billing account on the MCA. These roles are in addition to the built-in Azure roles used to manage resource assignments. Billing account roles are used to manage your billing account, profiles, and invoice sections. Learn how to manage who has [access to your billing account](https://www.youtube.com/watch?v=9sqglBlKkho&ab_channel=AzureCostManagement) and get an overview of [how billing account roles work](../manage/understand-mca-roles.md) in Azure.
+For more information, see the [How to manage access to your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=jh7PUKeAb0M&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=6) video.
+ ### Step 9 - Organize your costs and customize billing The MCA provides you with flexibility to organize your costs based on your needs whether it's by department, project, or development environment. Understand how to [organize your costs](https://www.youtube.com/watch?v=7RxTfShGHwU) and to [customize your billing](../manage/mca-section-invoice.md) to meet your needs.
+For more information, see the [How to optimize your workloads and reduce costs under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=UxO2cFyWn0w&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=3) video.
+ ### Step 10 - Evaluate your needs for more tenants
-The MCA allows you to create multi-tenant billing relationships. They let you securely share your billing account with other tenants, while maintaining control over your billing data. If your organization needs multiple tenants, see [Manage billing across multiple tenants](../manage/manage-billing-across-tenants.md).
+The MCA allows you to create multitenant billing relationships. They let you securely share your billing account with other tenants, while maintaining control over your billing data. If your organization needs multiple tenants, see [Manage billing across multiple tenants](../manage/manage-billing-across-tenants.md).
## Manage your MCA after migration
Transition the billing ownership from your old agreement to your new one.
For more information, see [Cost Management + Billing frequently asked questions](../cost-management-billing-faq.yml).
+For more information about creating a subscription, see the [How to create an Azure Subscription under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=u5wf8KMD_M8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=8) video.
+
+If you're looking for Microsoft 365 admin center video resources, see [Microsoft Customer Agreement Video Tutorials](https://www.microsoft.com/licensing/learn-more/microsoft-customer-agreement/video-tutorials).
+ ## Other actions to manage your MCA
-The MCA provides more features for automation, reporting, and billing optimization for multiple tenants. These features may not be applicable to all customers; however, for those customers who need more reporting and automation, these features offer significant benefits. Review the following steps if necessary:
+The MCA provides more features for automation, reporting, and billing optimization for multiple tenants. These features might not be applicable to all customers; however, for those customers who need more reporting and automation, these features offer significant benefits. Review the following steps if necessary:
### Migrating APIs
If you need more support, use your standard support contacts, such as:
- Your Microsoft account manager. - Access [Microsoft support](https://portal.azure.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade) in the Azure portal.
+## MCA how-to videos
+
+The following videos provide more information about how to manage your MCA:
+
+- [Faster, Simpler Purchasing with the Microsoft Customer Agreement](https://www.youtube.com/watch?v=nhpIbhqojWE&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=2)
+- [How to optimize your workloads and reduce costs under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=UxO2cFyWn0w&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=3)
+- [How to find a copy of your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=SQbKGo8JV74&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=4)
+- [How to find and read your Microsoft Customer Agreement invoices in the Azure portal](https://www.youtube.com/watch?v=xkUkIunP4l8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=5)
+- [How to manage access to your Microsoft Customer Agreement in the Azure portal](https://www.youtube.com/watch?v=jh7PUKeAb0M&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=6)
+- [How to organize your Microsoft Customer Agreement Billing Account in the Azure portal](https://www.youtube.com/watch?v=6lmaovgWiZw&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=7)
+- [How to create an Azure Subscription under your Microsoft Customer Agreement](https://www.youtube.com/watch?v=u5wf8KMD_M8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=8)
+- [How to manage your subscriptions and organize your account in the Microsoft 365 admin center](https://www.youtube.com/watch?v=NO25_5QXoy8&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=9)
+- [How to find a copy of your Microsoft Customer Agreement in the Microsoft 365 admin center (MAC)](https://www.youtube.com/watch?v=pIe5yHljdcM&list=PLC6yPvO9Xb_fRexgBmBeILhzxdETFUZbv&index=10)
+ ## Next steps - [View and download your Azure invoice](../understand/download-azure-invoice.md)
cost-management-billing Choose Commitment Amount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md
Software costs aren't covered by savings plans. For more information, see [Softw
## Savings plan purchase recommendations
-Savings plan purchase recommendations are calculated by analyzing your hourly usage data over the last 7, 30, and 60 days. Azure simulates what your costs would have been if you had a savings plan and compares it with your actual pay-as-you-go costs incurred over the time duration. The commitment amount that maximizes your savings is recommended. To learn more about how recommendations are generated, see [How hourly commitment recommendations are generated](purchase-recommendations.md#how-hourly-commitment-recommendations-are-generated).
+Savings plan purchase recommendations are calculated by analyzing your hourly usage data over the last 7, 30, and 60 days. Azure simulates what your costs would have been if you had a savings plan and compares it with your actual pay-as-you-go costs incurred over the time duration. The commitment amount that maximizes your savings is recommended. To learn more about how recommendations are generated, see [How savings plan recommendations are generated](purchase-recommendations.md#how-savings-plan-recommendations-are-generated).
For example, you might incur about $500 in hourly pay-as-you-go compute charges most of the time, but sometimes usage spikes to $700. Azure determines your total costs (hourly savings plan commitment plus pay-as-you-go charges) if you had either a $500/hour or a $700/hour savings plan. Since the $700 usage is sporadic, the recommendation calculation is likely to determine that a $500 hourly commitment provides greater total savings. As a result, the $500/hour plan would be the recommended commitment.
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
Last updated 11/17/2023
Azure savings plan purchase recommendations are provided through [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/Cost), the savings plan purchase experience in [Azure portal](https://portal.azure.com/), and through the [Savings plan benefit recommendations API](/rest/api/cost-management/benefit-recommendations/list).
-## How hourly commitment recommendations are generated
+## How savings plan recommendations are generated
-The goal of our savings plan recommendation is to help you make the most cost-effective commitment. Calculations are based on your actual on-demand costs, and don't include usage covered by existing reservations or savings plans.
+The goal of our savings plan recommendation is to help you make the most cost-effective commitment. Savings plan recommendations are generated using your actual on-demand usage and costs (including any negotiated on-demand discounts).
-We start by looking at your hourly and total on-demand usage costs incurred from savings plan-eligible resources in the last 7, 30, and 60 days. These costs are inclusive of any negotiated discounts that you have. We then run hundreds of simulations of what your total cost would have been if you had purchased either a one or three-year savings plan with an hourly commitment equivalent to your hourly costs.
+We start by looking at your hourly and total on-demand usage costs incurred from savings plan-eligible resources in the last 7, 30, and 60 days. We determine what the optimal savings plan commitment would have been for each of these hours by applying the appropriate savings plan discounts to all your savings plan-eligible usage in each hour. We consider each of these commitments a candidate for a savings plan recommendation. We then run hundreds of simulations using each candidate to determine what your total cost would have been if you had purchased a savings plan equal to that candidate.
-As we simulate each candidate recommendation, some hours will result in savings. For example, when savings plan-discounted usage plus the hourly commitment less than that hourΓÇÖs historic on-demand charge. In other hours, no savings would be realized. For example, when discounted usage plus the hourly commitment is greater than or greater than on-demand charges. We sum up the simulated hourly charges for each candidate and compare it to your actual total on-demand charge. Only candidates that result in savings are eligible for consideration as recommendations. We also calculate the percentage of your compute usage costs that would be covered by the recommendation, plus any other previously purchased reservations or savings plan.
+Here's a video that explains how savings plan recommendations are generated.
-Finally, we present a differentiated set of one-year and three-year recommendations (currently up to 10 each). The recommendations provide the greatest savings across different compute coverage levels. The recommendations with the greatest savings for one year and three years are the highlighted options.
+>[!VIDEO https://www.youtube.com/embed/4HV9GT9kX6A]
-To account for scenarios where there were significant reductions in your usage, including recently decommissioned services, we run more simulations using only the last three days of usage. The lower of the three day and 30-day recommendations are highlighted, even in situations where the 30-day recommendation may appear to provide greater savings. The lower recommendation is to ensure that we don't encourage overcommitment based on stale data.
+The goal of these simulations is to compare each candidate's total cost ((hourly commitment * 24 hours * number of days in the simulation period) + the total on-demand cost incurred during the simulation period) to the actual total on-demand costs. Only candidates that result in net savings are eligible for consideration as actual recommendations. We take up to 10 of the best recommendations and present them to you. For each recommendation, we also calculate the percentage of your compute usage costs that would now be covered by the savings plan and any other previously purchased reservations or savings plans. The recommendations with the greatest savings for one year and three years are the highlighted options.
+
+To account for scenarios where there were significant reductions in your usage, including recently decommissioned services, we run more simulations using only the last three days of usage. The lower of the three-day and 30-day recommendations is shared, even in situations where the 30-day recommendation may appear to provide greater savings. This is done to ensure that we don't inadvertently recommend overcommitment based on stale data.
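To make the simulation concrete, here's a minimal Python sketch of the comparison described above for a made-up usage history. The flat 30% discount, the hourly usage figures, and the helper names are illustrative assumptions only, not actual Azure rates or the service's recommendation code.

```python
# Toy simulation of savings plan candidate selection; all values are made up.
DISCOUNT = 0.30  # assumed flat savings plan discount off pay-as-you-go rates

# Hourly pay-as-you-go cost of savings plan-eligible usage (one value per hour).
hourly_usage = [500.0] * 20 + [700.0] * 4

# Candidates: the optimal commitment for each individual hour, that is,
# that hour's usage with the assumed savings plan discount applied.
candidates = sorted({round(usage * (1 - DISCOUNT), 2) for usage in hourly_usage})

actual_on_demand = sum(hourly_usage)

def simulated_total_cost(commitment: float) -> float:
    """Commitment is paid every hour; any overage is billed at pay-as-you-go rates."""
    total = 0.0
    for usage in hourly_usage:
        discounted = usage * (1 - DISCOUNT)                # usage priced at plan rates
        covered = min(discounted, commitment)              # drawn against the commitment
        overage = (discounted - covered) / (1 - DISCOUNT)  # remainder at pay-as-you-go rates
        total += commitment + overage
    return total

for candidate in candidates:
    savings = actual_on_demand - simulated_total_cost(candidate)
    print(f"commitment ${candidate:.2f}/hour -> simulated savings ${savings:,.2f}")
```

In this toy run the lower commitment wins, because the occasional $700 hours aren't frequent enough to justify committing to them, which mirrors the sporadic-spike behavior described above.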
Note the following points: - Recommendations are refreshed several times a day.-- The recommended quantity for a scope is reduced on the same day that you purchase a savings plan for the scope. However, an update for the savings plan recommendation across scopes can take up to 25 days.
+- The savings plan recommendation for a specific scope is reduced on the same day that you purchase a savings plan for that scope. However, updates to recommendations for other scopes can take up to 25 days.
- For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to 25 days to adjust down. ## Recommendations in Azure Advisor When available, a savings plan purchase recommendation can also be found in Azure Advisor. While we may generate up to 10 recommendations, Azure Advisor only surfaces the single three-year recommendation with the greatest savings for each billing subscription. Keep the following points in mind: -- If you want to see recommendations for a one-year term or for other scopes, navigate to the savings plan purchase experience in Azure portal. For example, enrollment account, billing profile, resource groups, and so on. For more information, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan).-- Recommendations available in Advisor currently only consider your last 30 days of usage.-- Recommendations are for three-year savings plans.-- If you recently purchased a savings plan, Advisor reservation purchase and Azure saving plan recommendations can take up to five days to disappear.
+- If you want to see recommendations for a one-year term or for other scopes, navigate to the savings plan purchase experience in Azure portal. For example, enrollment account, billing profile, resource groups, and so on. For more information, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan).
+- Recommendations in Advisor currently only consider your last 30 days of usage.
+- Recommendations in Advisor are only for three-year savings plans.
+- If you recently purchased a savings plan or reserved instance, it can take up to 5 days for the purchase(s) to impact your recommendations in Advisor and Azure portal.
## Purchase recommendations in the Azure portal When available, up to 10 savings plan commitment recommendations can be found in the savings plan purchase experience in Azure portal. For more information, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan). Each recommendation includes the commitment amount, the estimated savings percentage (off your current pay-as-you-go costs) and the percentage of your compute usage costs that would be covered by this and any other previously purchased savings plans and reservations.
-By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and billing account for EA). You can also view separate subscription and resource group-level recommendations by changing benefit application to one of those levels.
+By default, the recommendations are for the entire billing scope (billing profile for MCA and enrollment account for EA). You can also view separate subscription and resource group-level recommendations by changing benefit application to one of those levels. We don't currently support management group-level recommendations.
-Recommendations are term-specific, so you'll see the one-year or three-year recommendations at each level by toggling the term options. We don't currently support management group-level recommendations.
+Recommendations are term-specific, so you'll see the one-year or three-year recommendations at each level by toggling the term options.
-The highlighted recommendation is projected to result in the greatest savings. The other values allow you to see how increasing or decreasing your commitment could affect both your savings. They also show how much of your total compute usage cost would be covered by savings plans or reservation commitments. When the commitment amount is increased, your savings could be reduced because you may end up with lower utilization each hour. If you lower the commitment, your savings could also be reduced. In this case, although you'll likely have greater utilization each hour, there will likely be other hours where your savings plan won't fully cover your usage. Usage beyond your hourly commitment is charged at the more expensive pay-as-you-go rates.
+The highlighted recommendation is projected to result in the greatest savings. The other values allow you to see how increasing or decreasing your commitment could affect both your savings. They also show how much of your total compute usage cost would be covered by savings plans or reservation commitments. When the commitment amount is increased, your savings may decline because you have lower utilization each hour. If you lower the commitment, your savings could also be reduced. In this case, although you'll have greater utilization, there will be more hours where your savings plan won't fully cover your usage. Usage beyond your hourly commitment is charged at the more expensive pay-as-you-go rates.
## Purchase recommendations with REST API
For more information about retrieving savings plan commitment recommendations, s
## Reservation trade in recommendations
-When you trade one or more reservations for a savings plan, you're shifting the balance of your previous commitments to a new savings plan commitment. For example, if you have a one-year reservation with a value of $500, and halfway through the term you look to trade it for a savings plan, you would still have an outstanding commitment of about $250.
-
-The minimum hourly commitment must be at least equal to the outstanding amount divided by (24 times the term length in days).
-
-As part of the trade in, the outstanding commitment is automatically included in your new savings plan. We do it by dividing the outstanding commitment by the number of hours in the term of the new savings plan. For example, 24 times the term length in days. And by making the value the minimum hourly commitment you can make during as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan.
-
-If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan is used to cover usage of eligible resources.
+When you trade one or more reservations for a savings plan, you're shifting the balance of your previous commitments to a new savings plan commitment. For example, if you have a one-year reservation with a value of $500, and halfway through the term you look to trade it for a savings plan, you will still have an outstanding commitment of about $250. The minimum hourly commitment must be at least equal to the outstanding amount divided by (24 * the term length in days).
-The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment:
+As part of the trade-in, the outstanding commitment is automatically included in your new savings plan. We divide the outstanding commitment by the number of hours in the term of the new savings plan (that is, 24 times the term length in days), and that value becomes the minimum hourly commitment you can make as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan. If you're trading multiple reservations, the total outstanding commitment is used. You may choose to increase the value, but you can't decrease it.
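As a quick check of the arithmetic in the preceding paragraphs, the following Python sketch (illustrative only, not an Azure API) converts the example's outstanding commitment into the minimum hourly commitment for a new one-year savings plan.

```python
# Illustrative only: derive the minimum hourly commitment for a trade-in.
outstanding_commitment = 250.0  # remaining value of the returned reservation(s), in dollars
term_days = 365                 # term length of the new one-year savings plan

hours_in_term = 24 * term_days
minimum_hourly_commitment = outstanding_commitment / hours_in_term

print(f"${minimum_hourly_commitment:.3f} per hour")  # prints "$0.029 per hour"
```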
-1. Download your price list.
-2. For each reservation order you're returning, find the product in the price sheet and determine its unit price under either a one-year or three-year savings plan (filter by term and price type).
-3. Multiply unit price by the number of instances that are being returned. The result gives you the total hourly commitment required to cover the product with your savings plan.
-4. Repeat for each reservation order to be returned.
-5. Sum the values and enter the total as the hourly commitment.
+The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment, see [Determine savings plan commitment needed to replace your reservation](reservation-trade-in.md#determine-savings-plan-commitment-needed-to-replace-your-reservation).
## Next steps
cost-management-billing Renew Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/renew-savings-plan.md
# Automatically renew your Azure savings plan You can automatically purchase a replacement savings plan when an existing savings plan expires. Automatic renewal provides an effortless way to continue getting savings plan discounts without having to closely monitor a savings plan's expiration. The renewal setting is turned off by default. Enable or disable the renewal setting anytime, up to the expiration of the existing savings plan.- Renewing a savings plan creates a new savings plan when the existing one expires. It doesn't extend the term of the existing savings plan.
-You can opt in to automatically renew at any time.
-
-There's no obligation to renew and you can opt out of the renewal at any time before the existing savings plan expires.
- ## Required renewal permissions The following conditions are required to renew a savings plan:
-For Enterprise Agreements (EA) and Microsoft Customer Agreements (MCA):
+Billing admin requirements for Enterprise Agreements (EA) and Microsoft Customer Agreements (MCA):
+- You must be either a Billing profile owner or Billing profile contributor of an MCA account
+- You must be an EA administrator with write access of an EA account
+- You must be a Savings plan purchaser
-- MCA - You must be a billing profile contributor-- EA - You must be an EA admin with write access For Microsoft Partner Agreements (MPA):- - You must be an owner of the existing savings plan.-- You must be an owner of the subscription if the savings plan is scoped to a single subscription or resource group.-- You must be an owner of the subscription if it has a shared scope or management group scope.
+- You must be an owner of the subscription.
## Set up renewal
In the Azure portal, search for **Savings plan** and select it.
## If you don't automatically renew
-Your services continue to run normally. You're charged pay-as-you-go rates for your usage after the savings plan expires. If the savings plan wasn't set for automatic renewal before expiration, you can't renew an expired savings plan. To continue to receive savings, you can buy a new savings plan.
+Your services continue to run normally. You're charged pay-as-you-go rates for your usage after the savings plan expires. You can't renew an expired savings plan - to continue to receive savings, you can buy a new savings plan.
## Default renewal settings
-By default, the renewal inherits all properties except automatic renewal setting from the expiring savings plan. A savings plan renewal purchase has the same billing subscription, term, billing frequency, and savings plan commitment.
-
-However, you can update the renewal commitment, billing frequency, and commitment term to optimize your savings.
+By default, the renewal inherits all properties except automatic renewal setting from the expiring savings plan. A savings plan renewal purchase has the same billing subscription, term, billing frequency, and savings plan commitment. The new savings plan inherits the scope setting from the expiring savings plan during renewal.
+However, you can explicitly set the hourly commitment, billing frequency, and commitment term to optimize your savings.
## When the new savings plan is purchased- A new savings plan is purchased when the existing savings plan expires. We try to prevent any delay between the two savings plans. Continuity ensures that your costs are predictable, and you continue to get discounts. ## Change parent savings plan after setting renewal
If you make any of the following changes to the expiring savings plan, the savin
- Transferring the savings plan from one account to another - Renew the enrollment
-The new savings plan inherits the scope setting from the expiring savings plan during renewal.
## New savings plan permissions
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Apart from [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/
> > You may [trade-in](reservation-trade-in.md) your Azure compute reservations for a savings plan or may continue to use and purchase reservations for those predictable, stable workloads where the specific configuration need is known. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).ΓÇï
-Although compute reservation exchanges become unavailable at the end of the grace period, noncompute reservation exchanges are unchanged. You're able to continue to trade-in reservations for saving plans.ΓÇï
+Although compute reservation exchanges become unavailable at the end of the grace period, noncompute reservation exchanges are unchanged. You're able to continue to trade in reservations for savings plans. To trade in reservations for a savings plan, you must meet the following criteria:
- You must have owner access on the Reservation Order to trade in an existing reservation. You can [Add or change users who can manage a savings plan](manage-savings-plan.md#who-can-manage-a-savings-plan).-- To trade-in a reservation for a savings plan, you must have Azure RBAC Owner permission on the subscription you plan to use to purchase a savings plan.
+- You must have the Savings plan purchaser role, or Owner permission on the subscription you plan to use to purchase the savings plan.
- EA Admin write permission or Billing profile contributor and higher, which are Cost Management + Billing permissions, are supported only for direct Savings plan purchases. They can't be used for savings plans purchases as a part of a reservation trade-in.-- The new savings plan's lifetime commitment should equal or be greater than the returned reservation's remaining commitment. Example: for a three-year reservation that's $100 per month and exchanged after the 18th payment, the new savings plan's lifetime commitment should be $1,800 or more (paid monthly or upfront).-- Microsoft isn't currently charging early termination fees for reservation trade ins. We might charge the fees made in the future. We currently don't have a date for enabling the fee.+
+The new savings plan's lifetime commitment must be equal to or greater than the returned reservations' remaining commitment. Example: for a three-year reservation that's $100 per month and exchanged after the 18th payment, the new savings plan's lifetime commitment should be $1,800 or more (paid monthly or upfront).
+
+Microsoft isn't currently charging early termination fees for reservation trade ins. We might charge the fees made in the future. We currently don't have a date for enabling the fee.
## How to trade in an existing reservation
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
These functions are useful inside conditions, they can be used to evaluate any t
| Math function | Task | | - | - | | [add](control-flow-expression-language-functions.md#add) | Return the result from adding two numbers. |
-| [div](control-flow-expression-language-functions.md#div) | Return the result from dividing two numbers. |
+| [div](control-flow-expression-language-functions.md#div) | Return the result from dividing one number by another number. |
| [max](control-flow-expression-language-functions.md#max) | Return the highest value from a set of numbers or an array. | | [min](control-flow-expression-language-functions.md#min) | Return the lowest value from a set of numbers or an array. |
-| [mod](control-flow-expression-language-functions.md#mod) | Return the remainder from dividing two numbers. |
+| [mod](control-flow-expression-language-functions.md#mod) | Return the remainder from dividing one number by another number. |
| [mul](control-flow-expression-language-functions.md#mul) | Return the product from multiplying two numbers. | | [rand](control-flow-expression-language-functions.md#rand) | Return a random integer from a specified range. | | [range](control-flow-expression-language-functions.md#range) | Return an integer array that starts from a specified integer. |
-| [sub](control-flow-expression-language-functions.md#sub) | Return the result from subtracting the second number from the first number. |
+| [sub](control-flow-expression-language-functions.md#sub) | Return the result from subtracting one number from another number. |
## Function reference
And returns this result: `"https://contoso.com"`
### div
-Return the integer result from dividing two numbers.
-To get the remainder result, see [mod()](#mod).
+Return the result of dividing one number by another number.
``` div(<dividend>, <divisor>) ```
+The precise return type of the function depends on the types of its parameters &mdash; see examples for detail.
+ | Parameter | Required | Type | Description | | | -- | - | -- | | <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
-| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but cannot be 0 |
+| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*. A *divisor* value of zero causes an error at runtime. |
||||| | Return value | Type | Description | | | - | -- |
-| <*quotient-result*> | Integer | The integer result from dividing the first number by the second number |
+| <*quotient-result*> | Integer or Float | The result of dividing the first number by the second number |
||||
-*Example*
+*Example 1*
-Both examples divide the first number by the second number:
+These examples divide the number 9 by 2:
```
-div(10, 5)
-div(11, 5)
+div(9, 2.0)
+div(9.0, 2)
+div(9.0, 2.0)
```
-And return this result: `2`
+And all return this result: `4.5`
+
+*Example 2*
+
+This example also divides the number 9 by 2, but because both parameters are integers the remainder is discarded (integer division):
+
+```
+div(9, 2)
+```
+
+The expression returns the result `4`. To obtain the value of the remainder, use the [mod()](#mod) function.
<a name="encodeUriComponent"></a>
And return this result: `1`
### mod
-Return the remainder from dividing two numbers.
-To get the integer result, see [div()](#div).
+Return the remainder from dividing one number by another number. For integer division, see [div()](#div).
``` mod(<dividend>, <divisor>)
mod(<dividend>, <divisor>)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
-| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but cannot be 0. |
+| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*. A *divisor* value of zero causes an error at runtime. |
||||| | Return value | Type | Description |
mod(<dividend>, <divisor>)
*Example*
-This example divides the first number by the second number:
+This example calculates the remainder when the first number is divided by the second number:
``` mod(3, 2) ```
-And return this result: `1`
+And returns this result: `1`
<a name="mul"></a>
mul(<multiplicand1>, <multiplicand2>)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*multiplicand1*> | Yes | Integer or Float | The number to multiply by *multiplicand2* |
-| <*multiplicand2*> | Yes | Integer or Float | The number that multiples *multiplicand1* |
+| <*multiplicand2*> | Yes | Integer or Float | The number that multiplies *multiplicand1* |
||||| | Return value | Type | Description |
mul(<multiplicand1>, <multiplicand2>)
*Example*
-These examples multiple the first number by the second number:
+These examples multiply the first number by the second number:
``` mul(1, 2)
And returns this result: `"{ \\"name\\": \\"Sophie Owen\\" }"`
### sub
-Return the result from subtracting the second number from the first number.
+Return the result from subtracting one number from another number.
``` sub(<minuend>, <subtrahend>)
databox-online Azure Stack Edge Gpu 2403 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2403-release-notes.md
+
+ Title: Azure Stack Edge 2403 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2403 release.
++
+
+++ Last updated : 04/03/2024+++
+# Azure Stack Edge 2403 release notes
++
+The following release notes identify critical open issues and resolved issues for the 2403 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2403** release, which maps to software version **3.2.2642.2453**.
+
+> [!Warning]
+> In this release, you must update the packet core version to AP5GC 2308 before you update to Azure Stack Edge 2403. For detailed steps, see [Azure Private 5G Core 2308 release notes](../private-5g-core/azure-private-5g-core-release-notes-2308.md).
+> If you update to Azure Stack Edge 2403 before updating to Packet Core 2308.0.1, you will experience a total system outage. In this case, you must delete and re-create the Azure Kubernetes service cluster on your Azure Stack Edge device.
+> Each time you change the Kubernetes workload profile, you are prompted for the Kubernetes update. Go ahead and apply the update.
+
+## Supported update paths
+
+To apply the 2403 update, your device must be running version 2303 or later.
+
+ - If you aren't running the minimum required version, you see this error:
+
+ *Update package can't be installed as its dependencies aren't met.*
+
+ - You can update to 2303 from 2207 or later, and then update to 2403.
+
+You can update to the latest version using the following update paths:
+
+| Current version of Azure Stack Edge software and Kubernetes | Update to Azure Stack Edge software and Kubernetes | Desired update to 2403 |
+| --| --| --|
+|2207 |2303 |2403 |
+|2209 |2303 |2403 |
+|2210 |2303 |2403 |
+|2301 |2303 |2403 |
+|2303 |Directly to |2403 |
+
+## What's new
+
+The 2403 release has the following new features and enhancements:
+
+- Deprecated support for Azure Kubernetes service telemetry on Azure Stack Edge.
+- Zone-label support for two-node Kubernetes clusters.
+- Hyper-V VM management, memory usage monitoring on Azure Stack Edge host.
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**| Clustering | Two-node cold boot of the server causes high availability VM cluster resources to come up as offline. Changed ColdStartSetting to AlwaysStart. |
+|**2.**| Marketplace image support | Fixed bug allowing Windows Marketplace image on Azure Stack Edge A and TMA. |
+|**3.**| Network connectivity | Fixed VM NIC link flapping after Azure Stack Edge host power off/on, which could cause a VM to lose its DHCP IP address. |
+|**4.**| Network connectivity |Due to proxy ARP configurations in some customer environments, **IP address in use** check returns false positive even though no endpoint in the network is using the IP. The fix skips the ARP-based VM **IP address in use** check if the IP address is allocated from an internal network managed by Azure Stack Edge. |
+|**5.**| Network connectivity | VM NIC change operation times out after 3 hours, which blocks other VM update operations. On Microsoft Kubernetes clusters, Persistent Volume (PV) dependent pods get stuck. The issue occurs when multiple NICs within a VM are being transferred from a VLAN virtual network to a non-VLAN virtual network. After the fix, the VM NIC change operation times out quickly and the VM update won't be blocked. |
+|**6.**| Kubernetes | Overall two-node Kubernetes resiliency improvements, like increasing memory for control plane for AKS workload cluster, increasing limits for etcd, multi-replica, and hard anti-affinity support for core DNS and Azure disk csi controller pods and improve VM failover times. |
+|**7.**| Compute Diagnostic and Update | Resiliency fixes |
+|**8.**| Security | STIG security fixes for Mariner Guest OS for Azure Kubernetes service on Azure Stack Edge. |
+|**9.**| VM operations | On an Azure Stack Edge cluster that deploys an AP5GC workload, after a host power cycle test, when the host returns a transient error about CPU group configuration, AzSHostAgent would crash. This caused a VM operations failure. The fix made *AzSHostAgent* resilient to a transient CPU group error. |
+
+<!--!## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|AKS... |The AKS Kubernetes... |
+|**2.**|Wi-Fi... |Starting this release... | |-->
+
+## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**| Azure Storage Explorer | The Blob storage endpoint certificate that's autogenerated by the Azure Stack Edge device might not work properly with Azure Storage Explorer. | Replace the Blob storage endpoint certificate. For detailed steps, see [Bring your own certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates). |
+|**2.**| Network connectivity | On a two-node Azure Stack Edge Pro 2 cluster with a teamed virtual switch for Port 1 and Port 2, if a Port 1 or Port 2 link is down, it can take up to 5 seconds to resume network connectivity on the remaining active port. If a Kubernetes cluster uses this teamed virtual switch for management traffic, pod communication may be disrupted up to 5 seconds. | |
+|**3.**| Virtual machine | After the host or Kubernetes node pool VM is shut down, there's a chance that kubelet in node pool VM fails to start due to a CPU static policy error. Node pool VM shows **Not ready** status, and pods won't be scheduled on this VM. | Enter a support session and ssh into the node pool VM, then follow steps in [Changing the CPU Manager Policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#changing-the-cpu-manager-policy) to remediate the kubelet service. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command looks like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** aren't supported. |For Blob endpoints, partial updates of blobs after a Refresh might result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create a blob in the cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh the blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. <br>These actions can result in the updated sections of the blob not getting updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error looks similar to the following:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>`mkdir: can't create directory 'test': Permission denied`|
|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `AzCopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you might see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
|**7.**|Kubernetes cluster|When you apply an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines restart. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You must restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
|**9.**|Kubernetes |Port 31000 is reserved for the Kubernetes Dashboard. Port 31001 is reserved for the Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules might require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI might take several seconds to update. |The following scenarios in the local UI might be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you might not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This also affects the Event Grid IoT Edge module and other applications on the Azure Stack Edge device. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes uses the one remaining GPU. |
|**23.**|Custom script VM extension |There's a known issue with Windows VMs that were created in an earlier release when the device is updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update, causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). <br> A consolidated PowerShell sketch of these steps appears after this table. |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution doesn't stop working past end of life, there are no plans to update it. |To run the latest Azure IoT Edge [LTS release](../iot-edge/version-history.md#version-history) with the latest updates and features on your Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
|**27.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you must delete the AKS cluster, then modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. |
|**28.**|AKS Update |The AKS Kubernetes update might fail if one of the AKS VMs isn't running. This issue might be seen in a two-node cluster. |If the AKS update fails, [connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md). Check the state of the Kubernetes VMs by running the `Get-VM` cmdlet. If a VM is off, run the `Start-VM` cmdlet to restart it. Once the Kubernetes VM is running, reapply the update. |
+|**29.**|Wi-Fi |Wi-Fi functionality for Azure Stack Edge Mini R is deprecated. | |
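+
+For known issue 23, the Windows VM Guest Agent workaround can be consolidated into the following PowerShell sketch, run inside the affected Windows VM over RDP. It only restates the table steps; the command names come from the table and this isn't an official script.
+
+```powershell
+# Check whether the Azure VM guest agent process is running.
+Get-Process WaAppAgent -ErrorAction SilentlyContinue
+
+# If it isn't running, restart the RdAgent service and wait about 5 minutes.
+Get-Service RdAgent | Restart-Service
+Start-Sleep -Seconds 300
+
+# While WaAppAgent is running, stop the stuck guest agent process;
+# it restarts automatically with the newer version.
+Stop-Process -Name WindowsAzureGuest* -Force
+
+# Verify that the guest agent version is now 2.7.41491.971.
+Get-Process WindowsAzureGuestAgent | Format-List ProductVersion
+```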
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md).
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Previously updated : 08/04/2023 Last updated : 04/01/2024 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 12/21/2023 Last updated : 04/03/2024 # Update your Azure Stack Edge Pro GPU [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes the steps required to install update on your Azure Stack Edge Pro with GPU via the local web UI and via the Azure portal. You apply the software updates or hotfixes to keep your Azure Stack Edge Pro device and the associated Kubernetes cluster on the device up-to-date.
+This article describes the steps required to install updates on your Azure Stack Edge Pro device with GPU via the local web UI and via Azure portal.
+
+Apply the software updates or hotfixes to keep your Azure Stack Edge Pro device and the associated Kubernetes cluster on the device up-to-date.
> [!NOTE] > The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version. ## About latest updates
-The current update is Update 2312. This update installs two updates, the device update followed by Kubernetes updates.
+The current version is Update 2403. This update installs two updates, the device update followed by Kubernetes updates.
The associated versions for this update are: -- Device software version: Azure Stack Edge 2312 (3.2.2510.2000)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2312 (3.2.2510.2000)-- Device Kubernetes workload profile: Other workloads-- Kubernetes server version: v1.26.3-- IoT Edge version: 0.1.0-beta15-- Azure Arc version: 1.13.4-- GPU driver version: 535.104.05-- CUDA version: 12.2
+- Device software version: Azure Stack Edge 2403 (3.2.2642.2453).
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2403 (3.2.2642.2453).
+- Device Kubernetes workload profile: Azure Private MEC.
+- Kubernetes server version: v1.27.8.
+- IoT Edge version: 0.1.0-beta15.
+- Azure Arc version: 1.14.5.
+- GPU driver version: 525.85.12.
+- CUDA version: 12.0.
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2312-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2403-release-notes.md).
-**To apply the 2312 update, your device must be running version 2203 or later.**
+**To apply the 2403 update, your device must be running version 2203 or later.**
-- If you are not running the minimum required version, you'll see this error:
+- If you aren't running the minimum required version, you see this error:
- *Update package cannot be installed as its dependencies are not met.*
+ *Update package can't be installed as its dependencies aren't met.*
-- You can update to 2303 from 2207 or later, and then install 2312.
+- You can update to 2303 from 2207 or later, and then install 2403.
Supported update paths:
-| Current version of Azure Stack Edge software and Kubernetes | Upgrade to Azure Stack Edge software and Kubernetes | Desired update to 2312 |
+| Current version of Azure Stack Edge software and Kubernetes | Upgrade to Azure Stack Edge software and Kubernetes | Desired update to 2403 |
|-|-| |
-| 2207 | 2303 | 2312 |
-| 2209 | 2303 | 2312 |
-| 2210 | 2303 | 2312 |
-| 2301 | 2303 | 2312 |
-| 2303 | Directly to | 2312 |
+| 2207 | 2303 | 2403 |
+| 2209 | 2303 | 2403 |
+| 2210 | 2303 | 2403 |
+| 2301 | 2303 | 2403 |
+| 2303 | Directly to | 2403 |
### Update Azure Kubernetes service on Azure Stack Edge > [!IMPORTANT] > Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2312.
+If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2403.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2312:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2403:
1. Update your device version to 2303. 1. Update your Kubernetes version to 2210. 1. Update your Kubernetes version to 2303.
-1. Update both device software and Kubernetes to 2312.
+1. Update both device software and Kubernetes to 2403.
-If you are running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2303 and then to 2312.
+If you're running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2303 and then to 2403.
-If you are running 2303, you can update both your device version and Kubernetes version directly to 2312.
+If you're running 2303, you can update both your device version and Kubernetes version directly to 2403.
-In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2312.
+In Azure portal, the process requires two clicks: the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2403.
-From the local UI, you will have to run each update separately: update the device version to 2303, update Kubernetes version to 2210, update Kubernetes version to 2303, and then the third update gets both the device version and Kubernetes version to 2312.
+From the local UI, you'll have to run each update separately: update the device version to 2303, update the Kubernetes version to 2210, update the Kubernetes version to 2303, and then apply a final update that gets both the device version and Kubernetes version to 2403.
-Each time you change the Kubernetes profile, you are prompted for the Kubernetes update. Go ahead and apply the update.
+Each time you change the Kubernetes profile, you're prompted for the Kubernetes update. Go ahead and apply the update.
### Updates for a single-node vs two-node
-The procedure to update an Azure Stack Edge is the same whether it is a single-node device or a two-node cluster. This applies both to the Azure portal or the local UI procedure.
+The procedure to update an Azure Stack Edge is the same whether it's a single-node device or a two-node cluster. This applies both to the Azure portal and the local UI procedures.
-- **Single node** - For a single node device, installing an update or hotfix is disruptive and will restart your device. Your device will experience a downtime for the entire duration of the update.
+- **Single node** - For a single node device, installing an update or hotfix is disruptive and restarts your device. Your device will experience downtime for the entire duration of the update.
-- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster might experience short, intermittent disruptions while the update is in progress. We recommend that you shouldn't perform any operations on the device node when update is in progress.
+- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster might experience short, intermittent disruptions while the update is in progress. We recommend that you shouldn't perform any operations on the device node when an update is in progress.
- The Kubernetes worker VMs will go down when a node goes down. The Kubernetes master VM will fail over to the other node. Workloads will continue to run. For more information, see [Kubernetes failover scenarios for Azure Stack Edge](azure-stack-edge-gpu-kubernetes-failover-scenarios.md).
+ The Kubernetes worker VMs go down when a node goes down. The Kubernetes master VM fails over to the other node. Workloads continue to run. For more information, see [Kubernetes failover scenarios for Azure Stack Edge](azure-stack-edge-gpu-kubernetes-failover-scenarios.md).
-Provisioning actions such as creating shares or virtual machines are not supported during update. The update takes about 60 to 75 minutes per node to complete.
+Provisioning actions such as creating shares or virtual machines aren't supported during update. The update takes about 60 to 75 minutes per node to complete.
To install updates on your device, follow these steps:
Each of these steps is described in the following sections.
2. In **Select update server type**, from the dropdown list, choose from Microsoft Update server (default) or Windows Server Update Services.
- If updating from the Windows Server Update Services, specify the server URI. The server at that URI will deploy the updates on all the devices connected to this server.
+ If updating from the Windows Server Update Services, specify the server URI. The server at that URI deploys the updates on all the devices connected to this server.
<!--![Configure updates 2](./media/azure-stack-edge-gpu-install-update/configure-update-server-2.png)-->
Each of these steps is described in the following sections.
## Use the Azure portal
-We recommend that you install updates through the Azure portal. The device automatically scans for updates once a day. Once the updates are available, you see a notification in the portal. You can then download and install the updates.
+We recommend that you install updates through Azure portal. The device automatically scans for updates once a day. Once the updates are available, you see a notification in the portal. You can then download and install the updates.
> [!NOTE] > - Make sure that the device is healthy and status shows as **Your device is running fine!** before you proceed to install the updates.
+Depending on the software version that you're running, the install process might differ slightly.
-Depending on the software version that you are running, install process might differ slightly.
--- If you are updating from 2106 to 2110 or later, you will have a one-click install. See the **version 2106 and later** tab for instructions.-- If you are updating to versions prior to 2110, you will have a two-click install. See **version 2105 and earlier** tab for instructions.
+- If you're updating from 2106 to 2110 or later, you'll have a one-click install. See the **version 2106 and later** tab for instructions.
+- If you're updating to versions prior to 2110, you'll have a two-click install. See **version 2105 and earlier** tab for instructions.
### [version 2106 and later](#tab/version-2106-and-later)
Depending on the software version that you are running, install process might di
### [version 2105 and earlier](#tab/version-2105-and-earlier)
-1. When the updates are available for your device, you see a notification in the **Overview** page of your Azure Stack Edge resource. Select the notification or from the top command bar, **Update device**. This will allow you to apply device software updates.
+1. When the updates are available for your device, you see a notification in the **Overview** page of your Azure Stack Edge resource. Select the notification, or select **Update device** from the top command bar. This allows you to apply device software updates.
![Software version after update.](./media/azure-stack-edge-gpu-install-update/portal-update-1.png)
Depending on the software version that you are running, install process might di
![Software version after update 6.](./media/azure-stack-edge-gpu-install-update/portal-update-5.png)
-4. After the download is complete, the notification banner updates to indicate the completion. If you chose to download and install the updates, the installation will begin automatically.
+4. After the download is complete, the notification banner updates to indicate the completion. If you chose to download and install the updates, the installation begins automatically.
If you chose to download updates only, then select the notification to open the **Device updates** blade. Select **Install**.
Depending on the software version that you are running, install process might di
![Software version after update 12.](./media/azure-stack-edge-gpu-install-update/portal-update-11.png)
-7. After the restart, the device software will finish updating. After the update is complete, you can verify from the local web UI that the device software is updated. The Kubernetes software version has not been updated.
+7. After the restart, the device software will finish updating. After the update is complete, you can verify from the local web UI that the device software is updated. The Kubernetes software version hasn't been updated.
![Software version after update 13.](./media/azure-stack-edge-gpu-install-update/portal-update-12.png)
-8. You will see a notification banner indicating that device updates are available. Select this banner to start updating the Kubernetes software on your device.
+8. You'll see a notification banner indicating that device updates are available. Select this banner to start updating the Kubernetes software on your device.
![Software version after update 13a.](./media/azure-stack-edge-gpu-install-update/portal-update-13.png)
Do the following steps to download the update from the Microsoft Update Catalog.
![Search catalog.](./media/azure-stack-edge-gpu-install-update/download-update-1.png)
-1. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
+1. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then select **Search**.
- The update listing appears as **Azure Stack Edge Update 2312**.
+ The update listing appears as **Azure Stack Edge Update 2403**.
> [!NOTE] > Make sure to verify which workload you are running on your device [via the local UI](./azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-compute-ips-1) or [via the PowerShell](./azure-stack-edge-connect-powershell-interface.md) interface of the device. Depending on the workload that you are running, the update package will differ.
Do the following steps to download the update from the Microsoft Update Catalog.
| Kubernetes | Local UI Kubernetes workload profile | Update package name | Example Update File | ||--||--|
- | Azure Kubernetes Service | Azure Private MEC Solution in your environment<br><br>SAP Digital Manufacturing for Edge Computing or another Microsoft Partner Solution in your Environment | Azure Stack Edge Update 2312 Kubernetes Package for Private MEC/SAP Workloads | release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_MsKubernetes_Package |
- | Kubernetes for Azure Stack Edge |Other workloads in your environment | Azure Stack Edge Update 2312 Kubernetes Package for Non Private MEC/Non SAP Workloads | \release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_AseKubernetes_Package |
+ | Azure Kubernetes Service | Azure Private MEC Solution in your environment<br><br>SAP Digital Manufacturing for Edge Computing or another Microsoft Partner Solution in your Environment | Azure Stack Edge Update 2403 Kubernetes Package for Private MEC/SAP Workloads | release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_MsKubernetes_Package |
+ | Kubernetes for Azure Stack Edge |Other workloads in your environment | Azure Stack Edge Update 2403 Kubernetes Package for Non Private MEC/Non SAP Workloads | \release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_AseKubernetes_Package |
-1. Select **Download**. There are two packages to download for the update. The first package will have two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has two files for the Kubernetes updates (*Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+1. Select **Download**. There are two packages to download for the update. The first package has two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has two files for the Kubernetes updates (*Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
Prior to the update or hotfix installation, make sure that:
This procedure takes around 20 minutes to complete. Perform the following steps to install the update or hotfix.
-1. In the local web UI, go to **Maintenance** > **Software update**. Make a note of the software version that you are running.
+1. In the local web UI, go to **Maintenance** > **Software update**. Make a note of the software version that you're running.
2. Provide the path to the update file. You can also browse to the update installation file if placed on a network share. Select the two software files (with *SoftwareUpdatePackage.0.exe* and *SoftwareUpdatePackage.1.exe* suffix) together.
This procedure takes around 20 minutes to complete. Perform the following steps
<!--![update device 4](./media/azure-stack-edge-gpu-install-update/local-ui-update-4.png)-->
-4. When prompted for confirmation, select **Yes** to proceed. Given the device is a single node device, after the update is applied, the device restarts and there is downtime.
+4. When prompted for confirmation, select **Yes** to proceed. Given the device is a single node device, after the update is applied, the device restarts and there's downtime.
![update device 5.](./media/azure-stack-edge-gpu-install-update/local-ui-update-5.png)
-5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration.
+5. The update starts. After the device is successfully updated, it restarts. The local UI isn't accessible in this duration.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2312**.
+6. After the restart is complete, you're taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2403**.
-7. You will now update the Kubernetes software version. Select the remaining two Kubernetes files together (file with the *Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe* suffix) and repeat the above steps to apply update.
+7. You'll now update the Kubernetes software version. Select the remaining two Kubernetes files together (files with the *Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe* suffixes) and repeat the above steps to apply the update.
<!--![Screenshot of files selected for the Kubernetes update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-7.png)-->
This procedure takes around 20 minutes to complete. Perform the following steps
9. When prompted for confirmation, select **Yes** to proceed.
-10. After the Kubernetes update is successfully installed, there is no change to the displayed software in **Maintenance** > **Software update**.
+10. After the Kubernetes update is successfully installed, there's no change to the displayed software in **Maintenance** > **Software update**.
![Screenshot of update device 6.](./media/azure-stack-edge-gpu-install-update/portal-update-17.png) ## Next steps
-Learn more about [administering your Azure Stack Edge Pro](azure-stack-edge-manage-access-power-connectivity-mode.md).
+- Learn more about [administering your Azure Stack Edge Pro](azure-stack-edge-manage-access-power-connectivity-mode.md).
databox-online Azure Stack Edge Gpu Kubernetes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-kubernetes-overview.md
Previously updated : 07/26/2023 Last updated : 04/01/2024
Once the Kubernetes cluster is deployed, then you can manage the applications de
For more information on deploying Kubernetes cluster, go to [Deploy a Kubernetes cluster on your Azure Stack Edge device](azure-stack-edge-gpu-create-kubernetes-cluster.md). For information on management, go to [Use kubectl to manage Kubernetes cluster on your Azure Stack Edge device](azure-stack-edge-gpu-create-kubernetes-cluster.md). -
-### Kubernetes and IoT Edge
-
-This feature has been deprecated. Support will end soon.
-
-All new deployments of IoT Edge on Azure Stack Edge must be on a Linux VM. For detailed steps, see [Deploy IoT runtime on Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md).
- ### Kubernetes and Azure Arc Azure Arc is a hybrid management tool that will allow you to deploy applications on your Kubernetes clusters. Azure Arc also allows you to use Azure Monitor for containers to view and monitor your clusters. For more information, go to [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md). For information on Azure Arc pricing, go to [Azure Arc pricing](https://azure.microsoft.com/services/azure-arc/#pricing).
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
Previously updated : 08/04/2023 Last updated : 04/01/2024 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
In this tutorial, you learn how to:
> * Configure compute > * Get Kubernetes endpoints - ## Prerequisites Before you set up a compute role on your Azure Stack Edge Pro device, make sure that:
databox-online Azure Stack Edge Pro R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 10/14/2022 Last updated : 03/25/2024 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
You can add or delete virtual networks associated with your virtual switches. To
## Configure compute IPs
-Follow these steps to configure compute IPs for your Kubernetes workloads.
+After the virtual switches are created, you can enable the switches for Kubernetes compute traffic.
1. In the local UI, go to the **Kubernetes** page.
-1. From the dropdown select a virtual switch that you will use for Kubernetes compute traffic. <!--By default, all switches are configured for management. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.-->
+1. Specify a workload from the options provided.
+ - If you're working with an Azure Private MEC solution, select the option for **an Azure Private MEC solution in your environment**.
+ - If you're working with an SAP Digital Manufacturing solution or another Microsoft partner solution, select the option for **a SAP Digital Manufacturing for Edge Computing or another Microsoft partner solution in your environment**.
+ - For other workloads, select the option for **other workloads in your environment**.
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+ If prompted, confirm the option you specified and then select **Apply**.
- - For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of two, free, contiguous IPv4 addresses.
+ To use PowerShell to specify the workload, see detailed steps in [Change Kubernetes workload profiles](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles).
+ ![Screenshot of the Workload selection options on the Kubernetes page of the local UI for two node.](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/azure-stack-edge-kubernetes-workload-selection.png)
- > [!IMPORTANT]
- > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the ```Set-HcsKubeClusterNetworkInfo``` cmdlet from the PowerShell interface of the device. For more information, see Change Kubernetes pod and service subnets. <!--Target URL not available.-->
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
- > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the ```Set-HcsMacAddressPool``` cmdlet on the PowerShell interface of the device.
+1. From the dropdown list, select the virtual switch you want to enable for Kubernetes compute traffic.
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
-1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
+ If you select the **Azure Private MEC solution** or **SAP Digital Manufacturing for Edge Computing or another Microsoft partner** workload option for your environment, you must provide a contiguous range of a minimum of 6 IPv4 addresses (or more) for a 1-node configuration.
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of one IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+   If you select the **other workloads** option for an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) must be provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 free, contiguous IPv4 addresses.
+ > [!IMPORTANT]
+   > - If you're running **other workloads** in your environment, Kubernetes on Azure Stack Edge uses the 172.27.0.0/16 subnet for pods and the 172.28.0.0/16 subnet for services. Make sure that these subnets are not already in use in your network. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+ > - DHCP mode is not supported for Kubernetes node IPs.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. The service IP addresses can be updated later.
+
1. Select **Apply**.
- ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/compute-virtual-switch-1.png)
+ ![Screenshot of Configure compute page in Advanced networking in local UI 2.](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
-1. The configuration takes a couple minutes to apply and you may need to refresh the browser.
+1. The configuration takes a couple minutes to apply and you may need to refresh the browser.
1. Select **Next: Web proxy** to configure web proxy.
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
``` > [!NOTE]
- > The artifactName 'CodeAnalysisLogs' is required for integration with Defender for Cloud. For additional tool configuration options, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)
+ > The artifactName 'CodeAnalysisLogs' is required for integration with Defender for Cloud. For additional tool configuration options and environment variables, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)
1. To commit the pipeline, select **Save and run**.
The pipeline will run for a few minutes and save the results.
> [!NOTE] > Install the SARIF SAST Scans Tab extension on the Azure DevOps organization in order to ensure that the generated analysis results will be displayed automatically under the Scans tab.
+## Uploading findings from third-party security tooling into Defender for Cloud
+
+While Defender for Cloud provides the MSDO CLI for standardized functionality and policy controls across a set of open-source security analyzers, you have the flexibility to upload results to Defender for Cloud from other third-party security tooling that you may have configured in CI/CD pipelines, for comprehensive code-to-cloud contextualization. All results uploaded to Defender for Cloud must be in standard SARIF format.
+
+First, ensure your Azure DevOps repositories are [onboarded to Defender for Cloud](quickstart-onboard-devops.md). After the repositories are onboarded, Defender for Cloud continuously monitors the 'CodeAnalysisLogs' artifact for SARIF output.
+
+You can use the 'PublishBuildArtifacts@1' task to ensure SARIF output is published to the correct artifact. For example, if a security analyzer outputs 'results.sarif', you can configure the following task in your job to ensure results are uploaded to Defender for Cloud:
+
+ ```yml
+ - task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: 'results.sarif'
+ ArtifactName: 'CodeAnalysisLogs'
+ ```
+Findings from third-party security tools will appear as 'Azure DevOps repositories should have code scanning findings resolved' assessments associated with the repository in which the security finding was identified.
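+
+For reference, the following is a minimal sketch of how the publish step can sit alongside a third-party analyzer in a pipeline job. The analyzer step and its output path are placeholders for whatever tooling you run; only the 'CodeAnalysisLogs' artifact name is significant to Defender for Cloud.
+
+ ```yml
+ steps:
+ # Placeholder for your third-party security analyzer; substitute the task or script your tooling provides.
+ - script: ./run-security-scan.sh --output results.sarif
+   displayName: 'Run third-party security analyzer (example)'
+ # Publish the SARIF output to the CodeAnalysisLogs artifact that Defender for Cloud monitors.
+ - task: PublishBuildArtifacts@1
+   inputs:
+     PathtoPublish: 'results.sarif'
+     ArtifactName: 'CodeAnalysisLogs'
+ ```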
+ ## Learn more - Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
description: Learn how Defender for Cloud can gather information about your mult
- Previously updated : 12/27/2023+ Last updated : 04/07/2024
+#customer intent: As a user, I want to understand how agentless machine scanning works in Defender for Cloud so that I can effectively collect data from my machines.
# Agentless machine scanning
Agentless scanning assists you in the identification process of actionable postu
||| |Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
-| Supported use cases:| :::image type="icon" source="./medi) **Only available with Defender for Servers plan 2**|
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
-| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
+| Supported use cases:| :::image type="icon" source="./medi) **Only available with Defender for Servers plan 2**|
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
+| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) | | Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted ΓÇô customer-managed keys (CMK) (preview)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
**Estimated date for change: May 2024**
-the recommendation ### [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d57a4221-a804-52ca-3dea-768284f06bb7) is set to be deprecated.
+The recommendation [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d57a4221-a804-52ca-3dea-768284f06bb7) is set to be deprecated.
## Deprecating of virtual machine recommendation
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
Title: Dell Edge 5200 (E500) - Microsoft Defender for IoT description: Learn about the Dell Edge 5200 appliance for OT monitoring with Microsoft Defender for IoT. Previously updated : 04/24/2022 Last updated : 04/08/2024
This article describes the Dell Edge 5200 appliance for OT sensors.
|**Hardware profile** | E500| |**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 | |**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 |
-|**Status** | Supported|
+|**Status** | Supported, available preconfigured |
The following image shows the hardware elements on the Dell Edge 5200 that are used by Defender for IoT:
defender-for-iot Plan Prepare Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-prepare-deploy.md
Title: Prepare an OT site deployment - Microsoft Defender for IoT description: Learn how to prepare for an OT site deployment, including understanding how many OT sensors you'll need, where they should be placed, and how they'll be managed. Previously updated : 02/16/2023 Last updated : 04/08/2024 # Prepare an OT site deployment
defender-for-iot Understand Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/understand-network-architecture.md
Title: Microsoft Defender for IoT and your network architecture - Microsoft Defender for IoT description: Describes the Purdue reference module in relation to Microsoft Defender for IoT to help you understand more about your own OT network architecture. Previously updated : 06/02/2022 Last updated : 04/08/2024
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
Title: Preconfigured appliances for OT network monitoring description: Learn about the appliances available for use with Microsoft Defender for IoT OT sensors and on-premises management consoles. Previously updated : 07/11/2022 Last updated : 04/08/2024
Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to prov
> [!NOTE] > This article also includes information relevant for on-premises management consoles. For more information, see the [Air-gapped OT sensor management deployment path](ot-deploy/air-gapped-deploy.md).
->
+ ## Advantages of pre-configured appliances Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
defender-for-iot Configure Mirror Esxi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-esxi.md
Title: Configure a monitoring interface using an ESXi vSwitch - Sample - Microsoft Defender for IoT description: This article describes traffic mirroring methods with an ESXi vSwitch for OT monitoring with Microsoft Defender for IoT. Previously updated : 09/20/2022 Last updated : 04/08/2024 - # Configure traffic mirroring with a ESXi vSwitch This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
If you need to cancel or delete any DMS task, project, or service, perform the c
az dms project task delete --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name runnowtask ```
-3. To cancel a running project, use the following command:
- ```azurecli
- az dms project task cancel -n runnowtask --project-name PGMigration -g PostgresDemo --service-name PostgresCLI
- ```
-
-4. To delete a running project, use the following command:
+3. To delete a project, use the following command:
```azurecli
- az dms project task delete -n runnowtask --project-name PGMigration -g PostgresDemo --service-name PostgresCLI
+ az dms project delete -n PGMigration -g PostgresDemo --service-name PostgresCLI
```
-5. To delete DMS service, use the following command:
+4. To delete DMS service, use the following command:
```azurecli az dms delete -g ProgresDemo -n PostgresCLI
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 02/28/2024 Last updated : 04/05/2024
Add or remove specific rules your DNS forwarding ruleset as desired, such as:
- A rule to resolve an on-premises zone: internal.contoso.com. - A wildcard rule to forward unmatched DNS queries to a protective DNS service.
+> [!IMPORTANT]
+> The rules shown in this quickstart are examples of rules that can be used for specific scenarios. None of the forwarding rules described in this article are required. Be careful to test your forwarding rules and ensure that the rules don't cause DNS resolution issues.
+ ### Delete a rule from the forwarding ruleset Individual rules can be deleted or disabled. In this example, a rule is deleted.
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 02/28/2024 Last updated : 04/05/2024
$virtualNetworkLink2.ToJsonString()
## Create forwarding rules ++ Create a forwarding rule for a ruleset to one or more target DNS servers. You must specify the fully qualified domain name (FQDN) with a trailing dot. The **New-AzDnsResolverTargetDnsServerObject** cmdlet sets the default port as 53, but you can also specify a unique port. ```Azure PowerShell
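# A minimal sketch (the resource group and ruleset names are placeholders): create target DNS
# server objects for the servers used in this example, then add a domain rule and a wildcard rule.
# Verify the parameter names against the Az.DnsResolver module version you have installed.
$targetDNS1 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.2 -Port 53
$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.3 -Port 53
$targetDNS3 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.5.5.5 -Port 53
New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "contoso" -DomainName "contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1, $targetDNS2)
New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "wildcard" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS3)
```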
In this example:
- 192.168.1.2 and 192.168.1.3 are on-premises DNS servers. - 10.5.5.5 is a protective DNS service.
+> [!IMPORTANT]
+> The rules shown in this quickstart are examples of rules that can be used for specific scenarios. None of the forwarding rules described in this article are required. Be careful to test your forwarding rules and ensure that the rules don't cause DNS resolution issues.
+ ## Test the private resolver You should now be able to send DNS traffic to your DNS resolver and resolve records based on your forwarding rulesets, including:
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
For example, if you have the following rules:
A query for `secure.store.azure.contoso.com` matches the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`. > [!IMPORTANT]
-> If a rule is present in the ruleset that has as its destination a private resolver inbound endpoint, do not link the ruleset to the VNet where the inbound endpoint is provisioned. This configuration can cause DNS resolution loops. For example: In the previous scenario, no ruleset link should be added to `myeastvnet` because the inbound endpoint at `10.10.0.4` is provisioned in `myeastvnet` and a rule is present that resolves `azure.contoso.com` using the inbound endpoint.
+> If a rule is present in the ruleset that has as its destination a private resolver inbound endpoint, do not link the ruleset to the VNet where the inbound endpoint is provisioned. This configuration can cause DNS resolution loops. For example: In the previous scenario, no ruleset link should be added to `myeastvnet` because the inbound endpoint at `10.10.0.4` is provisioned in `myeastvnet` and a rule is present that resolves `azure.contoso.com` using the inbound endpoint.<br><br>
+> The rules shown in this article are examples of rules that can be used for specific scenarios. None of the forwarding rules described here are required. Be careful to test your forwarding rules and ensure that the rules don't cause DNS resolution issues.
#### Rule processing
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
Title: Resolve Azure and on-premises domains
-description: Configure Azure and on-premises DNS to resolve private DNS zones and on-premises domains
+ Title: Resolve Azure and on-premises domains.
+description: Configure Azure and on-premises DNS to resolve private DNS zones and on-premises domains.
Previously updated : 10/05/2023 Last updated : 04/05/2024 #Customer intent: As an administrator, I want to resolve on-premises domains in Azure and resolve Azure private zones on-premises.
## Hybrid DNS resolution
-This article provides guidance on how to configure hybrid DNS resolution by using an [Azure DNS Private Resolver](#azure-dns-private-resolver) with a [DNS forwarding ruleset](#dns-forwarding-ruleset).
+This article provides guidance on how to configure hybrid DNS resolution by using an [Azure DNS Private Resolver](#azure-dns-private-resolver) with a [DNS forwarding ruleset](#dns-forwarding-ruleset). In this scenario, your Azure DNS resources are connected to an on-premises network using a VPN or ExpressRoute connection.
*Hybrid DNS resolution* is defined here as enabling Azure resources to resolve your on-premises domains, and on-premises DNS to resolve your Azure private DNS zones.
Create a private zone with at least one resource record to use for testing. The
- [Create a private zone - PowerShell](private-dns-getstarted-powershell.md) - [Create a private zone - CLI](private-dns-getstarted-cli.md)
-In this article, the private zone **azure.contoso.com** and the resource record **test** are used. Autoregistration isn't required for the current demonstration.
+In this article, the private zone **azure.contoso.com** and the resource record **test** are used. Autoregistration isn't required for the current demonstration.
> [!IMPORTANT] > A recursive server is used to forward queries from on-premises to Azure in this example. If the server is authoritative for the parent zone (contoso.com), forwarding is not possible unless you first create a delegation for azure.contoso.com. [ ![View resource records](./media/private-resolver-hybrid-dns/private-zone-records-small.png) ](./media/private-resolver-hybrid-dns/private-zone-records.png#lightbox)
-**Requirement**: You must create a virtual network link in the zone to the virtual network where you deploy your Azure DNS Private Resolver. In the following example, the private zone is linked to two VNets: **myeastvnet** and **mywestvnet**. At least one link is required.
+**Requirement**: You must create a virtual network link in the zone to the virtual network where you deploy your Azure DNS Private Resolver. In the following example, the private zone is linked to two VNets: **myeastvnet** and **mywestvnet**. At least one link is required.
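If you script this step, here's a minimal Azure PowerShell sketch of creating the zone link, assuming the `Az.PrivateDns` module and placeholder resource group, zone, and VNet names.
```Azure PowerShell
# Link the private zone to the virtual network that hosts the private resolver.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myeastvnet"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "myresourcegroup" `
    -ZoneName "azure.contoso.com" -Name "myeastvnet-link" -VirtualNetworkId $vnet.Id
```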
[ ![View zone links](./media/private-resolver-hybrid-dns/private-zone-links-small.png) ](./media/private-resolver-hybrid-dns/private-zone-links.png#lightbox) ## Create an Azure DNS Private Resolver
-The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
+The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
- [Create a private resolver - portal](dns-private-resolver-get-started-portal.md) - [Create a private resolver - PowerShell](dns-private-resolver-get-started-powershell.md)
- When you're finished, write down the IP address of the inbound endpoint for the Azure DNS Private Resolver. In this example, the IP address is **10.10.0.4**. This IP address is used later to configure on-premises DNS conditional forwarders.
+ When you're finished, write down the IP address of the inbound endpoint for the Azure DNS Private Resolver. In this example, the IP address is **10.10.0.4**. This IP address is used later to configure on-premises DNS conditional forwarders.
[ ![View endpoint IP address](./media/private-resolver-hybrid-dns/inbound-endpoint-ip-small.png) ](./media/private-resolver-hybrid-dns/inbound-endpoint-ip.png#lightbox)
Create a forwarding ruleset in the same region as your private resolver. The fol
[ ![View ruleset region](./media/private-resolver-hybrid-dns/forwarding-ruleset-region-small.png) ](./media/private-resolver-hybrid-dns/forwarding-ruleset-region.png#lightbox)
-**Requirement**: You must create a virtual network link to the vnet where your private resolver is deployed. In the following example, two virtual network links are present. The link **myeastvnet-link** is created to a hub vnet where the private resolver is provisioned. There's also a virtual network link **myeastspoke-link** that provides hybrid DNS resolution in a spoke vnet that doesn't have its own private resolver. The spoke network is able to use the private resolver because it peers with the hub network. The spoke vnet link isn't required for the current demonstration.
+**Requirement**: You must create a virtual network link to the vnet where your private resolver is deployed. In the following example, two virtual network links are present. The link **myeastvnet-link** is created to a hub vnet where the private resolver is provisioned. There's also a virtual network link **myeastspoke-link** that provides hybrid DNS resolution in a spoke vnet that doesn't have its own private resolver. The spoke network is able to use the private resolver because it peers with the hub network. The spoke vnet link isn't required for the current demonstration.
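A hedged Azure PowerShell sketch of creating the ruleset link follows; the names are placeholders, and the exact parameter names assume a recent `Az.DnsResolver` module.
```Azure PowerShell
# Link the DNS forwarding ruleset to the hub virtual network.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myeastvnet"
New-AzDnsForwardingRulesetVirtualNetworkLink -ResourceGroupName "myresourcegroup" `
    -DnsForwardingRulesetName "myruleset" -VirtualNetworkLinkName "myeastvnet-link" `
    -VirtualNetworkId $vnet.Id
```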
[ ![View ruleset links](./media/private-resolver-hybrid-dns/ruleset-links-small.png) ](./media/private-resolver-hybrid-dns/ruleset-links.png#lightbox)
-Next, create a rule in your ruleset for your on-premises domain. In this example, we use **contoso.com**. Set the destination IP address for your rule to be the IP address of your on-premises DNS server. In this example, the on-premises DNS server is at **10.100.0.2**. Verify that the rule is **Enabled**.
+Next, create a rule in your ruleset for your on-premises domain. In this example, we use **contoso.com**. Set the destination IP address for your rule to be the IP address of your on-premises DNS server. In this example, the on-premises DNS server is at **10.100.0.2**. Verify that the rule is **Enabled**.
[ ![View rules](./media/private-resolver-hybrid-dns/ruleset-rules-small.png) ](./media/private-resolver-hybrid-dns/ruleset-rules.png#lightbox)
The procedure to configure on-premises DNS depends on the type of DNS server you
## Demonstrate hybrid DNS
-Using a VM located in the virtual network where the Azure DNS Private Resolver is provisioned, issue a DNS query for a resource record in your on-premises domain. In this example, a query is performed for the record **testdns.contoso.com**:
+Using a VM located in the virtual network where the Azure DNS Private Resolver is provisioned, issue a DNS query for a resource record in your on-premises domain. In this example, a query is performed for the record **testdns.contoso.com**:
![Verify Azure to on-premise](./media/private-resolver-hybrid-dns/azure-to-on-premises-lookup.png)
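For example, on a Windows VM you can issue the query with PowerShell's built-in `Resolve-DnsName` cmdlet (`nslookup` works equally well):
```Azure PowerShell
# Run on a VM in the linked virtual network; the query uses Azure-provided DNS.
Resolve-DnsName testdns.contoso.com
```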
-The path for the query is: Azure DNS > inbound endpoint > outbound endpoint > ruleset rule for contoso.com > on-premises DNS (10.100.0.2). The DNS server at 10.100.0.2 is an on-premises DNS resolver, but it could also be an authoritative DNS server.
+The path for the query is: Azure DNS > inbound endpoint > outbound endpoint > ruleset rule for contoso.com > on-premises DNS (10.100.0.2). The DNS server at 10.100.0.2 is an on-premises DNS resolver, but it could also be an authoritative DNS server.
Using an on-premises VM or device, issue a DNS query for a resource record in your Azure private DNS zone. In this example, a query is performed for the record **test.azure.contoso.com**:
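From on-premises, a comparable query against the Azure private zone record looks like the following sketch; the server address reflects the example environment.
```Azure PowerShell
# Run on an on-premises machine; 10.100.0.2 is the local DNS resolver in this example.
Resolve-DnsName test.azure.contoso.com -Server 10.100.0.2
```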
The path for this query is: client's default DNS resolver (10.100.0.2) > on-prem
* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md). * Understand how to [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md) using the Azure DNS Private Resolver. * Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
-* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
+* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md).
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. * [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
energy-data-services How To Enable Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-cors.md
You can set CORS rules for each Azure Data Manager for Energy instance. When you
[![Screenshot of adding new origin.](media/how-to-enable-cors/enable-cors-5.png)](media/how-to-enable-cors/enable-cors-5.png#lightbox) 1. For deleting an existing allowed origin, use the delete icon. [![Screenshot of deleting the existing origin.](media/how-to-enable-cors/enable-cors-6.png)](media/how-to-enable-cors/enable-cors-6.png#lightbox)
- 1. If * ( wildcard all) is added in any of the allowed origins then please ensure to delete all the other individual allowed origins.
+ 1. If * (wildcard all) is added to any of the allowed origins, delete all the other individual allowed origins.
 1. Once the allowed origin is added, the resource provisioning state is "Accepted", and during this time further modifications of the CORS policy aren't possible. It takes 15 minutes for CORS policies to be updated before the update CORS window is available again for modifications.
- [![Screenshot of CORS update window set out.](media/how-to-enable-cors/enable-cors-7.png)](media/how-to-enable-cors/enable-cors-7.png#lightbox)
+ [![Screenshot of CORS update window set out.](media/how-to-enable-cors/cors-update-window.png)](media/how-to-enable-cors/cors-update-window.png#lightbox)
## How are CORS rules evaluated? CORS rules are evaluated as follows:
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Title: 'Quickstart: Send or receive events using .NET'
description: A quickstart that shows you how to create a .NET Core application that sends events to and receives events from Azure Event Hubs. Previously updated : 03/09/2023 Last updated : 04/05/2024 ms.devlang: csharp
+#customer intent: As a .NET developer, I want to learn how to send events to an event hub and receive events from the event hub using C#.
# Quickstart: Send events to and receive events from Azure Event Hubs using .NET In this quickstart, you learn how to send events to an event hub and then receive those events from the event hub using the **Azure.Messaging.EventHubs** .NET library. > [!NOTE]
-> Quickstarts are for you to quickly ramp up on the service. If you are already familiar with the service, you may want to see .NET samples for Event Hubs in our .NET SDK repository on GitHub: [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples), [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples).
+> Quickstarts are for you to quickly ramp up on the service. If you are already familiar with the service, you might want to see .NET samples for Event Hubs in our .NET SDK repository on GitHub: [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples), [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples).
## Prerequisites If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you go through this quickstart.
This section shows you how to create a .NET Core console application to send eve
```csharp A batch of 3 events has been published. ```
-4. On the **Event Hubs Namespace** page in the Azure portal, you see three incoming messages in the **Messages** chart. Refresh the page to update the chart if needed. It may take a few seconds for it to show that the messages have been received.
+
+ > [!IMPORTANT]
+ > If you are using the Passwordless (Azure Active Directory's Role-based Access Control) authentication, select **Tools**, then select **Options**. In the **Options** window, expand **Azure Service Authentication**, and select **Account Selection**. Confirm that you are using the account that was added to the **Azure Event Hubs Data Owner** role on the Event Hubs namespace.
+4. On the **Event Hubs Namespace** page in the Azure portal, you see three incoming messages in the **Messages** chart. Refresh the page to update the chart if needed. It might take a few seconds for it to show that the messages have been received.
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png" alt-text="Image of the Azure portal page to verify that the event hub received the events" lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png":::
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
[Get the connection string to the storage account](../storage/common/storage-account-get-info.md#get-a-connection-string-for-the-storage-account)
-Note down the connection string and the container name. You use them in the receive code.
+Note down the connection string and the container name. You use them in the code to receive events from the event hub.
### Create a project for the receiver
Replace the contents of **Program.cs** with the following code:
{ // Write the body of the event to the console window Console.WriteLine("\tReceived event: {0}", Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray()));
- Console.ReadLine();
return Task.CompletedTask; }
Replace the contents of **Program.cs** with the following code:
// Write details about the error to the console window Console.WriteLine($"\tPartition '{eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen."); Console.WriteLine(eventArgs.Exception.Message);
- Console.ReadLine();
return Task.CompletedTask; } ```
Replace the contents of **Program.cs** with the following code:
> [!NOTE] > For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/Sample01_HelloWorld.md). 3. Run the receiver application.
-4. You should see a message that the events have been received.
+4. You should see a message that the events have been received. Press ENTER after you see a received event message.
```bash Received event: Event 1
Replace the contents of **Program.cs** with the following code:
Received event: Event 3 ``` These events are the three events you sent to the event hub earlier by running the sender program.
-5. In the Azure portal, you can verify that there are three outgoing messages, which Event Hubs sent to the receiving application. Refresh the page to update the chart. It may take a few seconds for it to show that the messages have been received.
+5. In the Azure portal, you can verify that there are three outgoing messages, which Event Hubs sent to the receiving application. Refresh the page to update the chart. It might take a few seconds for it to show that the messages have been received.
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png" alt-text="Image of the Azure portal page to verify that the event hub sent events to the receiving app" lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png":::
Azure Schema Registry of Event Hubs provides a centralized repository for managi
To learn more, see [Validate schemas with Event Hubs SDK](schema-registry-dotnet-send-receive-quickstart.md).
-## Clean up resources
-Delete the resource group that has the Event Hubs namespace or delete only the namespace if you want to keep the resource group.
## Samples and reference This quick start provides step-by-step instructions to implement a scenario of sending a batch of events to an event hub and then receiving them. For more samples, select the following links.
This quick start provides step-by-step instructions to implement a scenario of s
For complete .NET library reference, see our [SDK documentation](/dotnet/api/overview/azure/event-hubs).
-## Next steps
+## Clean up resources
+Delete the resource group that has the Event Hubs namespace or delete only the namespace if you want to keep the resource group.
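If you script cleanup, a minimal Azure PowerShell sketch follows; the resource group and namespace names are placeholders, and the parameter names assume a recent `Az.EventHub` module.
```Azure PowerShell
# Delete only the Event Hubs namespace and keep the resource group.
Remove-AzEventHubNamespace -ResourceGroupName "myresourcegroup" -Name "mynamespace"

# Or delete the entire resource group and everything in it.
Remove-AzResourceGroup -Name "myresourcegroup"
```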
+
+## Related content
See the following tutorial: > [!div class="nextstepaction"]
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md
Title: Send or receive events from Azure Event Hubs using JavaScript
+ Title: Send or receive events using JavaScript
description: This article provides a walkthrough for creating a JavaScript application that sends/receives events to/from Azure Event Hubs. Previously updated : 01/04/2023 Last updated : 04/05/2024 ms.devlang: javascript
+#customer intent: As a JavaScript developer, I want to learn how to send events to an event hub and receive events from the event hub using JavaScript.
-# Send events to or receive events from event hubs by using JavaScript
-This quickstart shows how to send events to and receive events from an event hub using the **@azure/event-hubs** npm package.
+# Quickstart: Send events to or receive events from event hubs by using JavaScript
+In this quickstart, you learn how to send events to and receive events from an event hub using the **@azure/event-hubs** npm package.
## Prerequisites
If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md
To complete this quickstart, you need the following prerequisites: -- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
+- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/).
- Node.js LTS. Download the latest [long-term support (LTS) version](https://nodejs.org). - Visual Studio Code (recommended) or any other integrated development environment (IDE). - **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md).
In this section, you create a JavaScript application that sends events to an eve
-1. Run `node send.js` to execute this file. This command sends a batch of three events to your event hub.
-1. In the Azure portal, verify that the event hub has received the messages. Refresh the page to update the chart. It might take a few seconds for it to show that the messages have been received.
+1. Run `node send.js` to execute this file. This command sends a batch of three events to your event hub. If you're using the Passwordless (Azure Active Directory's Role-based Access Control) authentication, you might want to run `az login` and sign into Azure using the account that was added to the Azure Event Hubs Data Owner role.
+1. In the Azure portal, verify that the event hub received the messages. Refresh the page to update the chart. It might take a few seconds for it to show that the messages are received.
[![Verify that the event hub received the messages](./media/node-get-started-send/verify-messages-portal.png)](./media/node-get-started-send/verify-messages-portal.png#lightbox) > [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub sendEvents.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/event-hubs/samples/v5/javascript/sendEvents.js).
- You have now sent events to an event hub.
--
+
## Receive events In this section, you receive events from an event hub by using an Azure Blob storage checkpoint store in a JavaScript application. It performs metadata checkpoints on received messages at regular intervals in an Azure Storage blob. This approach makes it easy to continue receiving messages later from where you left off.
To create an Azure storage account and a blob container in it, do the following
[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md).
-Note the connection string and the container name. You'll use them in the receive code.
+Note the connection string and the container name. You use them in the code to receive events.
### Install the npm packages to receive events
-For the receiving side, you need to install two more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it has already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
+For the receiving side, you need to install two more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
### [Passwordless (Recommended)](#tab/passwordless)
npm install @azure/eventhubs-checkpointstore-blob
1. Run `node receive.js` in a command prompt to execute this file. The window should display messages about received events.
- ```
+ ```bash
C:\Self Study\Event Hubs\JavaScript>node receive.js Received event: 'First event' from partition: '0' and consumer group: '$Default' Received event: 'Second event' from partition: '0' and consumer group: '$Default' Received event: 'Third event' from partition: '0' and consumer group: '$Default' ```+ > [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub receiveEventsUsingCheckpointStore.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsUsingCheckpointStore.js).
-You have now received events from your event hub. The receiver program will receive events from all the partitions of the default consumer group in the event hub.
+ The receiver program receives events from all the partitions of the default consumer group in the event hub.
+
+## Clean up resources
+Delete the resource group that has the Event Hubs namespace or delete only the namespace if you want to keep the resource group.
-## Next steps
+## Related content
Check out these samples on GitHub: - [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/event-hubs/samples/v5/javascript)
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Title: Monitoring Azure Event Hubs
description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs. Previously updated : 03/01/2023 Last updated : 04/05/2024 # Monitor Azure Event Hubs
See [Create diagnostic setting to collect platform logs and metrics in Azure](..
If you use **Azure Storage** to store the diagnostic logging information, the information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar. ### Azure Event Hubs
-If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select an existing event hub except for the event hub for which you are configuring diagnostic settings.
+If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select an existing event hub except for the event hub for which you're configuring diagnostic settings.
### Log Analytics If you use **Log Analytics** to store the diagnostic logging information, the information is stored in tables named **AzureDiagnostics** / **AzureMetrics** or **resource specific tables**
The metrics and logs you can collect are discussed in the following sections.
## Analyze metrics You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics).
-![Metrics Explorer with Event Hubs namespace selected](./media/monitor-event-hubs/metrics.png)
For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
For reference, you can see a list of [all resource metrics supported in Azure Mo
### Filter and split For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of an event hub. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information of filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md). ## Analyze logs Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs stores data in the following tables: **AzureDiagnostics** and **AzureMetrics**.
Using *Runtime audit logs* you can capture aggregated diagnostic information for
> Runtime audit logs are available only in **premium** and **dedicated** tiers. ### Enable runtime logs
-You can enable either runtime audit logs or application metrics logs by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in Azure portal. Click on *Add diagnostic setting* as shown below.
+You can enable either runtime audit or application metrics logging by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in Azure portal. Select **Add diagnostic setting** as shown in the following image.
-![Screenshot showing the Diagnostic settings page.](./media/monitor-event-hubs/add-diagnostic-settings.png)
Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed.
-![Screenshot showing the selection of RuntimeAuditLogs and ApplicationMetricsLogs.](./media/monitor-event-hubs/configure-diagnostic-settings.png)
-Once runtime logs are enabled, Event Hubs will start collecting and storing them according to the diagnostic setting configuration.
+
+Once runtime logs are enabled, Event Hubs start collecting and storing them according to the diagnostic setting configuration.
### Publish and consume sample data
-To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications which are based on [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) (AMQP) or using any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md).
+To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications that are based on the [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md), which uses Advanced Message Queuing Protocol (AMQP), or by using any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md).
### Analyze runtime audit logs
AZMSRuntimeAuditLogs
Upon execution of the query, you should be able to obtain corresponding audit logs in the following format. :::image type="content" source="./media/monitor-event-hubs/runtime-audit-logs.png" alt-text="Image showing the result of a sample query to analyze runtime audit logs." lightbox="./media/monitor-event-hubs/runtime-audit-logs.png":::
-By analyzing these logs you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs are defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
+By analyzing these logs, you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
### Analyze application metrics
AZMSApplicationMetricLogs
| where Provider == "EVENTHUB" ```
-Application metrics includes the following runtime metrics.
+Application metrics include the following runtime metrics.
:::image type="content" source="./media/monitor-event-hubs/application-metrics-logs.png" alt-text="Image showing the result of a sample query to analyze application metrics." lightbox="./media/monitor-event-hubs/application-metrics-logs.png":::
-Therefore you can use application metrics to monitor runtime metrics such as consumer lag or active connection from a given client application. Each field associated with runtime audit logs are defined in [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
+Therefore, you can use application metrics to monitor runtime metrics such as consumer lag or active connections from a given client application. Fields associated with runtime audit logs are defined in [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
## Alerts
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethern
For more information, see the [ExpressRoute FAQ](expressroute-faqs.md).
+## ExpressRoute cheat sheet
+
+Quickly access the most important ExpressRoute resources and information with this [cheat sheet](https://download.microsoft.com/download/b/9/2/b92e3598-6e2e-4327-a87f-8dc210abca6c/AzureNetworking-ExRCheatSheet-v1-2.pdf).
++ ## Features ### Layer 3 connectivity
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 02/27/2024 Last updated : 04/05/2024
The following table shows connectivity locations and the service providers for e
| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | PitChile | | **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks<br/>Ascenty Data Centers<br/>British Telecom<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Neutrona Networks<br/>Orange<br/>RedCLARA<br/>Tata Communications<br/>Telefonica<br/>UOLDIVEO | | **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers<br/>Tivit |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>Telus<br/>Zayo |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Pacific Northwest Gigapop<br/>PacketFabric<br/>Telus<br/>Zayo |
| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom | | **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT | | **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 01/26/2024 Last updated : 04/05/2024
The following table shows locations by service provider. If you want to view ava
| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | | **[Orange Poland](https://www.orange.pl/duze-firmy/rozwiazania-chmurowe)** | Supported | Supported | Warsaw | | **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 |
+| **Pacific Northwest Gigapop** | Supported | Supported | Seattle |
| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 | | **PitChile** | Supported | Supported | Santiago<br/>Miami |
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
The following use case is supported by [Azure Web Application Firewall on Azure
To protect internal servers or applications hosted in Azure from malicious requests that arrive from the Internet or an external network. Application Gateway provides end-to-end encryption.
+ For related information, see:
+
+ - [Azure Firewall Premium and name resolution](/azure/architecture/example-scenario/gateway/application-gateway-before-azure-firewall)
+ - [Application Gateway before Firewall](/azure/architecture/example-scenario/gateway/firewall-application-gateway)
 > [!TIP] > TLS 1.0 and 1.1 are being deprecated and won't be supported. TLS 1.0 and 1.1 versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable, and while they still currently work to allow backwards compatibility, they aren't recommended. Migrate to TLS 1.2 as soon as possible.
frontdoor Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md
After you've imported your certificate to a key vault, create an Azure Front Doo
Then, configure your domain to use the Azure Front Door secret for its TLS certificate.
-For a guided walkthrough of these steps, see [Configure HTTPS on an Azure Front Door custom domain using the Azure portal](standard-premium/how-to-configure-https-custom-domain.md#using-your-own-certificate).
+For a guided walkthrough of these steps, see [Configure HTTPS on an Azure Front Door custom domain using the Azure portal](standard-premium/how-to-configure-https-custom-domain.md#use-your-own-certificate).
### Switch between certificate types
frontdoor Front Door How To Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
Title: Onboard a root or apex domain to Azure Front Door
-description: Learn how to onboard a root or apex domain to an existing Azure Front Door using the Azure portal.
+description: Learn how to onboard a root or apex domain to an existing Azure Front Door by using the Azure portal.
zone_pivot_groups: front-door-tiers
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-Azure Front Door uses CNAME records to validate domain ownership for the onboarding of custom domains. Azure Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+Azure Front Door uses CNAME records to validate domain ownership for the onboarding of custom domains. Azure Front Door doesn't expose the front-end IP address associated with your Azure Front Door profile. So, you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
-The Domain Name System (DNS) protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`; you can create CNAME records for `somelabel.contoso.com`; but you can't create CNAME for `contoso.com` itself. This restriction presents a problem for application owners who load balances applications behind Azure Front Door. Since using an Azure Front Door profile requires creation of a CNAME record, it isn't possible to point at the Azure Front Door profile from the zone apex.
+The Domain Name System (DNS) protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME record for `contoso.com` itself. This restriction presents a problem for application owners who load balance applications behind Azure Front Door. Because using an Azure Front Door profile requires creation of a CNAME record, it isn't possible to point at the Azure Front Door profile from the zone apex.
-This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to an Azure Front Door profile that has public endpoints. Application owners can point to the same Azure Front Door profile used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door profile.
+You can resolve this problem by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to an Azure Front Door profile that has public endpoints. Application owners can point to the same Azure Front Door profile used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door profile.
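With Azure DNS, the apex alias record can also be created programmatically. The following Azure PowerShell sketch assumes an Azure Front Door Standard/Premium endpoint and placeholder resource names; the alias target is the Front Door endpoint resource.
```Azure PowerShell
# Point the zone apex at the Azure Front Door endpoint with an alias A record.
$endpoint = Get-AzFrontDoorCdnEndpoint -ResourceGroupName "myresourcegroup" `
    -ProfileName "myafdprofile" -EndpointName "myendpoint"
New-AzDnsRecordSet -ResourceGroupName "myresourcegroup" -ZoneName "contoso.com" `
    -Name "@" -RecordType A -Ttl 3600 -TargetResourceId $endpoint.Id
```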
Mapping your apex or root domain to your Azure Front Door profile requires *CNAME flattening* or *DNS chasing*, which is when the DNS provider recursively resolves CNAME entries until it resolves an IP address. Azure DNS supports this functionality for Azure Front Door endpoints. > [!NOTE]
-> There are other DNS providers as well that support CNAME flattening or DNS chasing. However, Azure Front Door recommends using Azure DNS for its customers for hosting their domains.
+> Other DNS providers support CNAME flattening or DNS chasing. However, Azure Front Door recommends using Azure DNS for its customers for hosting their domains.
-You can use the Azure portal to onboard an apex domain on your Azure Front Door and enable HTTPS on it by associating it with a Transport Layer Security (TLS) certificate. Apex domains are also referred as *root* or *naked* domains.
+You can use the Azure portal to onboard an apex domain on your Azure Front Door and enable HTTPS on it by associating it with a Transport Layer Security (TLS) certificate. Apex domains are also referred to as *root* or *naked* domains.
::: zone-end
You can use the Azure portal to onboard an apex domain on your Azure Front Door
## Onboard the custom domain to your Azure Front Door profile
-1. Select **Domains** from under *Settings* on the left side pane for your Azure Front Door profile and then select **+ Add** to add a new custom domain.
+1. Under **Settings**, select **Domains** for your Azure Front Door profile. Then select **+ Add** to add a new custom domain.
- :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to an Azure Front Door profile.":::
+ :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot that shows adding a new domain to an Azure Front Door profile.":::
-1. On **Add a domain** page, you enter information about the custom domain. You can choose Azure-managed DNS (recommended) or you can choose to use your DNS provider.
+1. On the **Add a domain** pane, you enter information about the custom domain. You can choose Azure-managed DNS (recommended), or you can choose to use your DNS provider.
- - **Azure-managed DNS** - select an existing DNS zone and for *Custom domain*, select **Add new**. Select **APEX domain** from the pop-up and then select **OK** to save.
+ - **Azure-managed DNS**: Select an existing DNS zone. For **Custom domain**, select **Add new**. Select **APEX domain** from the pop-up. Then select **OK** to save.
- :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot of adding a new custom domain to an Azure Front Door profile.":::
+ :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot that shows adding a new custom domain to an Azure Front Door profile.":::
- - **Another DNS provider** - make sure the DNS provider supports CNAME flattening and follow the steps for [adding a custom domain](standard-premium/how-to-add-custom-domain.md#add-a-new-custom-domain).
+ - **Another DNS provider**: Make sure the DNS provider supports CNAME flattening and follow the steps for [adding a custom domain](standard-premium/how-to-add-custom-domain.md#add-a-new-custom-domain).
-1. Select the **Pending** validation state. A new page appears with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`.
+1. Select the **Pending** validation state. A new pane appears with the DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`.
- :::image type="content" source="./media/front-door-apex-domain/pending-validation.png" alt-text="Screenshot of custom domain pending validation.":::
+ :::image type="content" source="./media/front-door-apex-domain/pending-validation.png" alt-text="Screenshot that shows the custom domain Pending validation.":::
- - **Azure DNS-based zone** - select the **Add** button to create a new TXT record with the displayed value in the Azure DNS zone.
+ - **Azure DNS-based zone**: Select **Add** to create a new TXT record with the value that appears in the Azure DNS zone. (An Azure PowerShell sketch for creating this record appears after this procedure.)
- :::image type="content" source="./media/front-door-apex-domain/validate-custom-domain.png" alt-text="Screenshot of validate a new custom domain.":::
+ :::image type="content" source="./media/front-door-apex-domain/validate-custom-domain.png" alt-text="Screenshot that shows validating a new custom domain.":::
- - If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
+ - If you're using another DNS provider, manually create a new TXT record with the name `_dnsauth.<your_subdomain>` with the record value as shown on the pane.
-1. Close the *Validate the custom domain* page and return to the *Domains* page for the Azure Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved, make sure your TXT record is correct and name servers are configured correctly if you're using Azure DNS.
+1. Close the **Validate the custom domain** pane and return to the **Domains** pane for the Azure Front Door profile. You should see **Validation state** change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to appear. If your validation doesn't get approved, make sure your TXT record is correct and that name servers are configured correctly if you're using Azure DNS.
- :::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot of new custom domain passing validation.":::
+ :::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot that shows a new custom domain passing validation.":::
-1. Select **Unassociated** from the *Endpoint association* column, to add the new custom domain to an endpoint.
+1. Select **Unassociated** from the **Endpoint association** column to add the new custom domain to an endpoint.
- :::image type="content" source="./media/front-door-apex-domain/unassociated-endpoint.png" alt-text="Screenshot of unassociated custom domain to an endpoint.":::
+ :::image type="content" source="./media/front-door-apex-domain/unassociated-endpoint.png" alt-text="Screenshot that shows an unassociated custom domain added to an endpoint.":::
-1. On the *Associate endpoint and route* page, select the **Endpoint** and **Route** you would like to associate the domain to. Then select **Associate** to complete this step.
+1. On the **Associate endpoint and route** pane, select the endpoint and route to which you want to associate the domain. Then select **Associate**.
- :::image type="content" source="./media/front-door-apex-domain/associate-endpoint.png" alt-text="Screenshot of associated endpoint and route page for a domain.":::
+ :::image type="content" source="./media/front-door-apex-domain/associate-endpoint.png" alt-text="Screenshot that shows the associated endpoint and route pane for a domain.":::
-1. Under the *DNS state* column, select the **CNAME record is currently not detected** to add the alias record to DNS provider.
+1. Under the **DNS state** column, select **CNAME record is currently not detected** to add the alias record to the DNS provider.
- - **Azure DNS** - select the **Add** button on the page.
+ - **Azure DNS**: Select **Add**.
- :::image type="content" source="./media/front-door-apex-domain/cname-record.png" alt-text="Screenshot of add or update CNAME record page.":::
+ :::image type="content" source="./media/front-door-apex-domain/cname-record.png" alt-text="Screenshot that shows the Add or update the CNAME record pane.":::
- - **A DNS provider that supports CNAME flattening** - you must manually enter the alias record name.
+ - **A DNS provider that supports CNAME flattening**: You must manually enter the alias record name.
-1. Once the alias record gets created and the custom domain is associated to the Azure Front Door endpoint, traffic starts flowing.
+1. After the alias record gets created and the custom domain is associated with the Azure Front Door endpoint, traffic starts flowing.
- :::image type="content" source="./media/front-door-apex-domain/cname-record-added.png" alt-text="Screenshot of completed APEX domain configuration.":::
+ :::image type="content" source="./media/front-door-apex-domain/cname-record-added.png" alt-text="Screenshot that shows the completed APEX domain configuration.":::
> [!NOTE]
-> * The **DNS state** column is used for CNAME mapping check. Since an apex domain doesn't support a CNAME record, the DNS state will show 'CNAME record is currently not detected' even after you add the alias record to the DNS provider.
-> * When placing service like an Azure Web App behind Azure Front Door, you need to configure with the web app with the same domain name as the root domain in Azure Front Door. You also need to configure the backend host header with that domain name to prevent a redirect loop.
-> * Apex domains don't have CNAME records pointing to the Azure Front Door profile, therefore managed certificate autorotation will always fail unless domain validation is completed between rotations.
+> * The **DNS state** column is used for CNAME mapping check. An apex domain doesn't support a CNAME record, so the DNS state shows **CNAME record is currently not detected** even after you add the alias record to the DNS provider.
+> * When you place a service like an Azure Web App behind Azure Front Door, you need to configure the web app with the same domain name as the root domain in Azure Front Door. You also need to configure the back-end host header with that domain name to prevent a redirect loop.
+> * Apex domains don't have CNAME records pointing to the Azure Front Door profile. Managed certificate autorotation always fails unless domain validation is finished between rotations.
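As referenced in the validation step earlier, the `_dnsauth` TXT record can be created in an Azure DNS zone with a short Azure PowerShell sketch like the following; the zone name and validation token are placeholders.
```Azure PowerShell
# Create the TXT record that Azure Front Door uses to validate domain ownership.
New-AzDnsRecordSet -ResourceGroupName "myresourcegroup" -ZoneName "contoso.com" `
    -Name "_dnsauth" -RecordType TXT -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Value "<validation-token-from-the-portal>")
```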
## Enable HTTPS on your custom domain
Follow the guidance for [configuring HTTPS for your custom domain](standard-prem
1. Create or edit the record for zone apex.
-1. Select the record **type** as *A* record and then select *Yes* for **Alias record set**. **Alias type** should be set to *Azure resource*.
+1. Select the record type as **A**. For **Alias record set**, select **Yes**. Set **Alias type** to **Azure resource**.
-1. Select the Azure subscription that contains your Azure Front Door profile. Then select the Azure Front Door resource from the **Azure resource** dropdown.
+1. Select the Azure subscription that contains your Azure Front Door profile. Then select the Azure Front Door resource from the **Azure resource** dropdown list.
1. Select **OK** to submit your changes.
- :::image type="content" source="./media/front-door-apex-domain/front-door-apex-alias-record.png" alt-text="Alias record for zone apex":::
+ :::image type="content" source="./media/front-door-apex-domain/front-door-apex-alias-record.png" alt-text="Screenshot that shows an alias record for zone apex.":::
-1. The above step creates a zone apex record pointing to your Azure Front Door resource and also a CNAME record mapping *afdverify* (example - `afdverify.contosonews.com`) that is used for onboarding the domain on your Azure Front Door profile.
+1. The preceding step creates a zone apex record that points to your Azure Front Door resource. It also creates a CNAME record mapping **afdverify** (for example, `afdverify.contosonews.com`) that's used for onboarding the domain on your Azure Front Door profile.
## Onboard the custom domain on your Azure Front Door
-1. On the Azure Front Door designer tab, select on '+' icon on the Frontend hosts section to add a new custom domain.
+1. On the Azure Front Door designer tab, select the **+** icon on the **Frontend hosts** section to add a new custom domain.
-1. Enter the root or apex domain name in the custom host name field, example `contosonews.com`.
+1. Enter the root or apex domain name in the **Custom host name** field. An example is `contosonews.com`.
-1. Once the CNAME mapping from the domain to your Azure Front Door is validated, select on **Add** to add the custom domain.
+1. After the CNAME mapping from the domain to your Azure Front Door is validated, select **Add** to add the custom domain.
1. Select **Save** to submit the changes.
- :::image type="content" source="./media/front-door-apex-domain/front-door-onboard-apex-domain.png" alt-text="Custom domain menu":::
+ :::image type="content" source="./media/front-door-apex-domain/front-door-onboard-apex-domain.png" alt-text="Screenshot that shows the Add a custom domain pane.":::
## Enable HTTPS on your custom domain
-1. Select the custom domain that was added and under the section **Custom domain HTTPS**, change the status to **Enabled**.
+1. Select the custom domain that was added. Under the section **Custom domain HTTPS**, change the status to **Enabled**.
-1. Select the **Certificate management type** to *'Use my own certificate'*.
+1. For **Certificate management type**, select **Use my own certificate**.
- :::image type="content" source="./media/front-door-apex-domain/front-door-onboard-apex-custom-domain.png" alt-text="Custom domain HTTPS settings":::
+ :::image type="content" source="./media/front-door-apex-domain/front-door-onboard-apex-custom-domain.png" alt-text="Screenshot that shows Custom domain HTTPS settings":::
> [!WARNING]
- > Azure Front Door managed certificate management type is not currently supported for apex or root domains. The only option available for enabling HTTPS on an apex or root domain for Azure Front Door is using your own custom TLS/SSL certificate hosted on Azure Key Vault.
+ > An Azure Front Door-managed certificate management type isn't currently supported for apex or root domains. The only option available for enabling HTTPS on an apex or root domain for Azure Front Door is to use your own custom TLS/SSL certificate hosted on Azure Key Vault.
-1. Ensure that you have setup the right permissions for Azure Front Door to access your key Vault as noted in the UI, before proceeding to the next step.
+1. Ensure that you set up the right permissions for Azure Front Door to access your key vault, as noted in the UI, before you proceed to the next step.
-1. Choose a **Key Vault account** from your current subscription and then select the appropriate **Secret** and **Secret version** to map to the right certificate.
+1. Choose a **Key Vault account** from your current subscription. Then select the appropriate **Secret** and **Secret version** to map to the right certificate.
-1. Select **Update** to save the selection and then Select **Save**.
+1. Select **Update** to save the selection. Then select **Save**.
-1. Select **Refresh** after a couple of minutes and then select the custom domain again to see the progress of certificate provisioning.
+1. Select **Refresh** after a couple of minutes. Then select the custom domain again to see the progress of certificate provisioning.
> [!WARNING]
-> Ensure that you have created appropriate routing rules for your apex domain or added the domain to existing routing rules.
+> Ensure that you created appropriate routing rules for your apex domain or added the domain to existing routing rules.
::: zone-end
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Title: 'How to add a custom domain - Azure Front Door'
-description: In this article, you learn how to onboard a custom domain to Azure Front Door profile using the Azure portal.
+description: In this article, you learn how to onboard a custom domain to an Azure Front Door profile by using the Azure portal.
Last updated 09/07/2023
-#Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
+#Customer intent: As a website owner, I want to add a custom domain to my Azure Front Door configuration so that my users can use my custom domain to access my content.
-# Configure a custom domain on Azure Front Door using the Azure portal
+# Configure a custom domain on Azure Front Door by using the Azure portal
-When you use Azure Front Door for application delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
+When you use Azure Front Door for application delivery, a custom domain is necessary if you want your own domain name to be visible in your user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
-After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door owned domain name. For example, `https://www.contoso.com/photo.png`.
+After you create an Azure Front Door Standard/Premium profile, the default front-end host has the subdomain `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your back end by default. An example is `https://contoso-frontend.azurefd.net/activeusers.htm`.
-## Prerequisites
+For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of a domain name that Azure Front Door owns. An example is `https://www.contoso.com/photo.png`.
-* Before you can complete the steps in this tutorial, you must first create an Azure Front Door profile. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
+## Prerequisites
+* Before you can finish the steps in this tutorial, you must first create an Azure Front Door profile. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium](create-front-door-portal.md).
* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).- * If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records. ## Add a new custom domain
After you create an Azure Front Door Standard/Premium profile, the default front
> [!NOTE] > If a custom domain is validated in an Azure Front Door or a Microsoft CDN profile already, then it can't be added to another profile.
-A custom domain is configured on the **Domains** page of the Azure Front Door profile. A custom domain can be set up and validated prior to endpoint association. A custom domain and its subdomains can only be associated with a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Azure Front Door profiles. You may also map custom domains with different subdomains to the same Azure Front Door endpoint.
+A custom domain is configured on the **Domains** pane of the Azure Front Door profile. A custom domain can be set up and validated before endpoint association. A custom domain and its subdomains can only be associated with a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Azure Front Door profiles. You can also map custom domains with different subdomains to the same Azure Front Door endpoint.
-1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** button.
+1. Under **Settings**, select **Domains** for your Azure Front Door profile. Then select **+ Add**.
- :::image type="content" source="../media/how-to-add-custom-domain/add-domain-button.png" alt-text="Screenshot of add domain button on domain landing page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-button.png" alt-text="Screenshot that shows the Add a domain button on the domain landing pane.":::
-1. On the *Add a domain* page, select the **Domain type**. You can select between a **Non-Azure validated domain** or an **Azure pre-validated domain**.
+1. On the **Add a domain** pane, select the domain type. You can choose **Non-Azure validated domain** or **Azure pre-validated domain**.
- * **Non-Azure validated domain** is a domain that requires ownership validation. When you select Non-Azure validated domain, the recommended DNS management option is to use Azure-managed DNS. You may also use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone. Then select an existing custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Then select **Add** to add your custom domain.
+ * **Non-Azure validated domain** is a domain that requires ownership validation. When you select **Non-Azure validated domain**, we recommend that you use the Azure-managed DNS option. You might also use your own DNS provider. If you choose an Azure-managed DNS, select an existing DNS zone. Then select an existing custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Then select **Add** to add your custom domain.
- :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot that shows the Add a domain pane.":::
- * **Azure pre-validated domain** is a domain already validated by another Azure service. When you select this option, domain ownership validation isn't required from Azure Front Door. A dropdown list of validated domains by different Azure services appear.
+ * **Azure pre-validated domain** is a domain already validated by another Azure service. When you select this option, domain ownership validation isn't required from Azure Front Door. A dropdown list of validated domains by different Azure services appears.
- :::image type="content" source="../media/how-to-add-custom-domain/pre-validated-custom-domain.png" alt-text="Screenshot of prevalidated custom domain in add a domain page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/pre-validated-custom-domain.png" alt-text="Screenshot that shows Pre-validated custom domains on the Add a domain pane.":::
> [!NOTE]
- > * Azure Front Door supports both Azure managed certificate and Bring Your Own Certificates. For Non-Azure validated domain, the Azure managed certificate is issued and managed by the Azure Front Door. For Azure pre-validated domain, the Azure managed certificate gets issued and is managed by the Azure service that validates the domain. To use own certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
- > * Azure Front Door supports Azure pre-validated domains and Azure DNS zones in different subscriptions.
- > * Currently Azure pre-validated domains only supports domains validated by Static Web App.
+ > * Azure Front Door supports both Azure-managed certificates and Bring Your Own Certificates (BYOCs). For a non-Azure validated domain, the Azure-managed certificate is issued and managed by Azure Front Door. For an Azure prevalidated domain, the Azure-managed certificate gets issued and is managed by the Azure service that validates the domain. To use your own certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
+ > * Azure Front Door supports Azure prevalidated domains and Azure DNS zones in different subscriptions.
+ > * Currently, Azure prevalidated domains only support domains validated by Azure Static Web Apps.
A new custom domain has a validation state of **Submitting**.
- :::image type="content" source="../media/how-to-add-custom-domain/validation-state-submitting.png" alt-text="Screenshot of domain validation state submitting.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/validation-state-submitting.png" alt-text="Screenshot that shows the domain validation state as Submitting.":::
> [!NOTE]
- > * Starting September 2023, Azure Front Door supports Bring Your Own Certificates (BYOC) based domain ownership validation. Front Door will automatically approve the domain ownership so long as the Certificate Name (CN) or Subject Alternative Name (SAN) of provided certificate matches the custom domain. When you select Azure managed certificate, the domain ownership will continue to be valdiated via the DNS TXT record.
- > * For custom domains created before BYOC based validation is supported and the domain validation status is anything but **Approved**, you need to trigger the auto approval of the domain ownership validation by selecting the **Validation State** and then click on the **Revalidate** button in the portal. If you're using the command line tool, you can trigger domain validation by sending an empty PATCH request to the domain API.
- > * An Azure pre-validated domain will have a validation state of **Pending** and will automatically change to **Approved** after a few minutes. Once validation gets approved, skip to [**Associate the custom domain to your Front Door endpoint**](#associate-the-custom-domain-with-your-azure-front-door-endpoint) and complete the remaining steps.
+ > * As of September 2023, Azure Front Door now supports BYOC-based domain ownership validation. Azure Front Door automatically approves the domain ownership if the Certificate Name (CN) or Subject Alternative Name (SAN) of the provided certificate matches the custom domain. When you select **Azure managed certificate**, the domain ownership continues to be validated via the DNS TXT record.
+ > * For custom domains created before BYOC-based validation is supported and the domain validation status is anything but **Approved**, you need to trigger the auto-approval of the domain ownership validation by selecting **Validation State** > **Revalidate** in the portal. If you're using the command-line tool, you can trigger domain validation by sending an empty `PATCH` request to the domain API, as shown in the example after this note.
+ > * An Azure prevalidated domain has a validation state of **Pending**. It automatically changes to **Approved** after a few minutes. After validation gets approved, skip to [Associate the custom domain to your Front Door endpoint](#associate-the-custom-domain-with-your-azure-front-door-endpoint) and finish the remaining steps.
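If you're using the Azure CLI, a minimal sketch of that empty `PATCH` request follows. The resource path under `Microsoft.Cdn/profiles` and the API version are assumptions for illustration; substitute the placeholders with your own values.

```azurecli-interactive
# Hypothetical example: trigger revalidation by sending an empty PATCH to the custom domain resource.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cdn/profiles/<profile-name>/customDomains/<custom-domain-name>?api-version=2023-05-01" \
  --body '{}'
```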
- The validation state will change to **Pending** after a few minutes.
+ After a few minutes, the validation state changes to **Pending**.
- :::image type="content" source="../media/how-to-add-custom-domain/validation-state-pending.png" alt-text="Screenshot of domain validation state pending.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/validation-state-pending.png" alt-text="Screenshot that shows the domain validation state as Pending.":::
-1. Select the **Pending** validation state. A new page appears with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`. If you're using Azure DNS-based zone, select the **Add** button, and a new TXT record with the displayed record value gets created in the Azure DNS zone. If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
+1. Select the **Pending** validation state. A new pane appears with DNS TXT record information that's needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`. If you're using an Azure DNS-based zone, select **Add**. A new TXT record with the record value that appears is created in the Azure DNS zone. If you're using another DNS provider, manually create a new TXT record named `_dnsauth.<your_subdomain>`, with the record value as shown on the pane.
- :::image type="content" source="../media/how-to-add-custom-domain/validate-custom-domain.png" alt-text="Screenshot of validate custom domain page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/validate-custom-domain.png" alt-text="Screenshot that shows the Validate the custom domain pane.":::
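If your zone is hosted in Azure DNS and you prefer the command line over the **Add** button, a sketch like the following creates the validation TXT record. The resource group, zone, subdomain, and token are placeholders that you replace with the values shown on the pane.

```azurecli-interactive
# Create the _dnsauth TXT record used for domain ownership validation (placeholder values).
az network dns record-set txt add-record \
  --resource-group <resource-group> \
  --zone-name contoso.com \
  --record-set-name _dnsauth.www \
  --value "<validation-token-from-the-pane>"
```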
-1. Close the page to return to custom domains list landing page. The provisioning state of custom domain should change to **Provisioned** and validation state should change to **Approved**.
+1. Close the pane to return to the custom domains list landing pane. The provisioning state of the custom domain should change to **Provisioned**. The validation state should change to **Approved**.
- :::image type="content" source="../media/how-to-add-custom-domain/provisioned-approved-status.png" alt-text="Screenshot of provisioned and approved status.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/provisioned-approved-status.png" alt-text="Screenshot that shows the Provisioning state and the Approved status.":::
For more information about domain validation states, see [Domains in Azure Front Door](../domain.md#domain-validation). ## Associate the custom domain with your Azure Front Door endpoint
-After you validate your custom domain, you can associate it to your Azure Front Door Standard/Premium endpoint.
+After you validate your custom domain, you can associate it with your Azure Front Door Standard/Premium endpoint.
-1. Select the **Unassociated** link to open the **Associate endpoint and routes** page. Select an endpoint and routes you want to associate the domain with. Then select **Associate** to update your configuration.
+1. Select the **Unassociated** link to open the **Associate endpoint and routes** pane. Select an endpoint and the routes with which you want to associate the domain. Then select **Associate** to update your configuration.
- :::image type="content" source="../media/how-to-add-custom-domain/associate-endpoint-routes.png" alt-text="Screenshot of associate endpoint and routes page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/associate-endpoint-routes.png" alt-text="Screenshot that shows the Associate endpoint and routes pane.":::
- The Endpoint association status should change to reflect the endpoint to which the custom domain is currently associated.
+ The **Endpoint association** status should change to reflect the endpoint to which the custom domain is currently associated.
- :::image type="content" source="../media/how-to-add-custom-domain/endpoint-association-status.png" alt-text="Screenshot of endpoint association link.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/endpoint-association-status.png" alt-text="Screenshot that shows the Endpoint association link.":::
-1. Select the DNS state link.
+1. Select the **DNS state** link.
- :::image type="content" source="../media/how-to-add-custom-domain/dns-state-link.png" alt-text="Screenshot of DNS state link.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/dns-state-link.png" alt-text="Screenshot that shows the DNS state link.":::
> [!NOTE]
- > For an Azure pre-validated domain, go to the DNS hosting service and manually update the CNAME record for this domain from the other Azure service endpoint to Azure Front Door endpoint. This step is required, regardless of whether the domain is hosted with Azure DNS or with another DNS service. The link to update the CNAME from the DNS State column isn't available for this type of domain.
+ > For an Azure prevalidated domain, go to the DNS hosting service and manually update the CNAME record for this domain from the other Azure service endpoint to Azure Front Door endpoint. This step is required, regardless of whether the domain is hosted with Azure DNS or with another DNS service. The link to update the CNAME from the **DNS state** column isn't available for this type of domain.
-1. The **Add or update the CNAME record** page appears and displays the CNAME record information that must be provided before traffic can start flowing. If you're using Azure DNS hosted zones, the CNAME records can be created by selecting the **Add** button on the page. If you're using another DNS provider, you must manually enter the CNAME record name and value as shown on the page.
+1. The **Add or update the CNAME record** pane appears with the CNAME record information that must be provided before traffic can start flowing. If you're using Azure DNS hosted zones, the CNAME records can be created by selecting **Add** on the pane. If you're using another DNS provider, you must manually enter the CNAME record name and value as shown on the pane.
- :::image type="content" source="../media/how-to-add-custom-domain/add-update-cname-record.png" alt-text="Screenshot of add or update CNAME record.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/add-update-cname-record.png" alt-text="Screenshot that shows the Add or update the CNAME record pane.":::
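If you host the zone in Azure DNS, a command-line sketch of creating the CNAME record follows. The resource group, zone, record name, and endpoint host name are placeholders; use the values shown on the pane.

```azurecli-interactive
# Map the custom domain to the Azure Front Door endpoint with a CNAME record (placeholder values).
az network dns record-set cname set-record \
  --resource-group <resource-group> \
  --zone-name contoso.com \
  --record-set-name www \
  --cname <endpoint-name>.z01.azurefd.net
```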
-1. Once the CNAME record gets created and the custom domain is associated to the Azure Front Door endpoint, traffic starts flowing.
+1. After the CNAME record is created and the custom domain is associated with the Azure Front Door endpoint, traffic starts flowing.
> [!NOTE]
- > * If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations.
- > * If your domain CNAME is indirectly pointed to a Front Door endpoint, for example, using Azure Traffic Manager for multi-CDN failover, the **DNS state** column shows as **CNAME/Alias record currently not detected**. Azure Front Door can't guarantee 100% detection of the CNAME record in this case. If you've configured an Azure Front Door endpoint to Azure Traffic Manager and still see this message, it doesnΓÇÖt mean you didn't set up correctly, therefore further no action is necessary from your side.
+ > * If HTTPS is enabled, certificate provisioning and propagation might take a few minutes because propagation is being done to all edge locations.
+ > * If your domain CNAME is indirectly pointed to an Azure Front Door endpoint, for example, by using Azure Traffic Manager for multi-CDN failover, the **DNS state** column shows as **CNAME/Alias record currently not detected**. Azure Front Door can't guarantee 100% detection of the CNAME record in this case. If you configured an Azure Front Door endpoint to Traffic Manager and still see this message, it doesn't mean that you didn't set up correctly. No further action is necessary from your side.
## Verify the custom domain
-After you've validated and associated the custom domain, verify that the custom domain is correctly referenced to your endpoint.
+After you validate and associate the custom domain, verify that the custom domain is correctly referenced to your endpoint.
-Lastly, validate that your application content is getting served using a browser.
+Lastly, validate that your application content is getting served by using a browser.
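As a quick check outside the browser, you can request a known asset over HTTPS and confirm that it returns successfully. The domain and path below are placeholders.

```
curl -I https://www.contoso.com/photo.png
```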
## Next steps * Learn how to [enable HTTPS for your custom domain](how-to-configure-https-custom-domain.md). * Learn more about [custom domains in Azure Front Door](../domain.md).
-* Learn about [End-to-end TLS with Azure Front Door](../end-to-end-tls.md).
+* Learn about [end-to-end TLS with Azure Front Door](../end-to-end-tls.md).
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Title: 'Configure HTTPS for your custom domain - Azure Front Door'
-description: In this article, you'll learn how to configure HTTPS on an Azure Front Door custom domain.
+description: In this article, you learn how to configure HTTPS on an Azure Front Door custom domain by using the Azure portal.
Last updated 10/31/2023
-#Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
+#Customer intent: As a website owner, I want to add a custom domain to my Azure Front Door configuration so that my users can use my custom domain to access my content.
-# Configure HTTPS on an Azure Front Door custom domain using the Azure portal
+# Configure HTTPS on an Azure Front Door custom domain by using the Azure portal
-Azure Front Door enables secure TLS delivery to your applications by default when you use your own custom domains. To learn more about custom domains, including how custom domains work with HTTPS, see [Domains in Azure Front Door](../domain.md).
+Azure Front Door enables secure Transport Layer Security (TLS) delivery to your applications by default when you use your own custom domains. To learn more about custom domains, including how custom domains work with HTTPS, see [Domains in Azure Front Door](../domain.md).
-Azure Front Door supports Azure-managed certificates and customer-managed certificates. In this article, you'll learn how to configure both types of certificates for your Azure Front Door custom domains.
+Azure Front Door supports Azure-managed certificates and customer-managed certificates. In this article, you learn how to configure both types of certificates for your Azure Front Door custom domains.
## Prerequisites * Before you can configure HTTPS for your custom domain, you must first create an Azure Front Door profile. For more information, see [Create an Azure Front Door profile](../create-front-door-portal.md).- * If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).- * If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
-## Azure Front Door-managed certificates for non-Azure pre-validated domains
+## Azure Front Door-managed certificates for non-Azure prevalidated domains
-Follow the steps below if you have your own domain, and the domain is not already associated with [another Azure service that pre-validates domains for Azure Front Door](../domain.md#domain-validation).
+If you have your own domain, and the domain isn't already associated with [another Azure service that prevalidates domains for Azure Front Door](../domain.md#domain-validation), follow these steps:
-1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
+1. Under **Settings**, select **Domains** for your Azure Front Door profile. Then select **+ Add** to add a new domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot that shows the domain configuration landing pane.":::
-1. On the **Add a domain** page, enter or select the following information, then select **Add** to onboard the custom domain.
+1. On the **Add a domain** pane, enter or select the following information. Then select **Add** to onboard the custom domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-domain-azure-managed.png" alt-text="Screenshot of add a domain page with Azure managed DNS selected.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-domain-azure-managed.png" alt-text="Screenshot that shows the Add a domain pane with Azure managed DNS selected.":::
| Setting | Value | |--|--|
- | Domain type | Select **Non-Azure pre-validated domain** |
- | DNS management | Select **Azure managed DNS (Recommended)** |
- | DNS zone | Select the **Azure DNS zone** that host the custom domain. |
+ | Domain type | Select **Non-Azure pre-validated domain**. |
+ | DNS management | Select **Azure managed DNS (Recommended)**. |
+ | DNS zone | Select the Azure DNS zone that hosts the custom domain. |
| Custom domain | Select an existing domain or add a new domain. |
- | HTTPS | Select **AFD Managed (Recommended)** |
+ | HTTPS | Select **AFD managed (Recommended)**. |
-1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
+1. Validate and associate the custom domain to an endpoint by following the steps to enable a [custom domain](how-to-add-custom-domain.md).
-1. After the custom domain is associated with an endpoint successfully, Azure Front Door generates a certificate and deploys it. This process may take from several minutes to an hour to complete.
+1. After the custom domain is successfully associated with an endpoint, Azure Front Door generates a certificate and deploys it. This process might take from several minutes to an hour to finish.
-## Azure-managed certificates for Azure pre-validated domains
+## Azure-managed certificates for Azure prevalidated domains
-Follow the steps below if you have your own domain, and the domain is associated with [another Azure service that pre-validates domains for Azure Front Door](../domain.md#domain-validation).
+If you have your own domain, and the domain is associated with [another Azure service that prevalidates domains for Azure Front Door](../domain.md#domain-validation), follow these steps:
-1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
+1. Under **Settings**, select **Domains** for your Azure Front Door profile. Then select **+ Add** to add a new domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot that shows the Domains landing pane.":::
-1. On the **Add a domain** page, enter or select the following information, then select **Add** to onboard the custom domain.
+1. On the **Add a domain** pane, enter or select the following information. Then select **Add** to onboard the custom domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-pre-validated-domain.png" alt-text="Screenshot of add a domain page with pre-validated domain.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-pre-validated-domain.png" alt-text="Screenshot that shows the Add a domain pane with a prevalidated domain.":::
| Setting | Value | |--|--|
- | Domain type | Select **Azure pre-validated domain** |
- | Pre-validated custom domain | Select a custom domain name from the drop-down list of Azure services. |
- | HTTPS | Select **Azure managed (Recommended)** |
+ | Domain type | Select **Azure pre-validated domain**. |
+ | Pre-validated custom domains | Select a custom domain name from the dropdown list of Azure services. |
+ | HTTPS | Select **Azure managed**. |
-1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
+1. Validate and associate the custom domain to an endpoint by following the steps to enable a [custom domain](how-to-add-custom-domain.md).
-1. Once the custom domain gets associated to endpoint successfully, an AFD managed certificate gets deployed to Front Door. This process may take from several minutes to an hour to complete.
+1. After the custom domain is successfully associated with an endpoint, an Azure Front Door-managed certificate gets deployed to Azure Front Door. This process might take from several minutes to an hour to finish.
-## Using your own certificate
+## Use your own certificate
You can also choose to use your own TLS certificate. Your TLS certificate must meet certain requirements. For more information, see [Certificate requirements](../domain.md?pivot=front-door-standard-premium#certificate-requirements). #### Prepare your key vault and certificate
-We recommend you create a separate Azure Key Vault to store your Azure Front Door TLS certificates. For more information, see [create an Azure Key Vault](../../key-vault/general/quick-create-portal.md). If you already a certificate, you can upload it to your new Azure Key Vault. Otherwise, you can create a new certificate through Azure Key Vault from one of the certificate authorities (CAs) partners.
+We recommend that you create a separate Azure Key Vault instance in which to store your Azure Front Door TLS certificates. For more information, see [Create a Key Vault instance](../../key-vault/general/quick-create-portal.md). If you already have a certificate, you can upload it to your new Key Vault instance. Otherwise, you can create a new certificate through Key Vault from one of the certificate authority (CA) partners.
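If you already have a certificate as a PFX file, a minimal sketch of uploading it with the Azure CLI follows. The vault name, certificate name, file path, and password are placeholders.

```azurecli-interactive
# Import an existing PFX certificate into the key vault (placeholder values).
az keyvault certificate import \
  --vault-name <key-vault-name> \
  --name contoso-com-tls \
  --file ./contoso-com.pfx \
  --password "<pfx-password>"
```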
> [!WARNING]
-> Azure Front Door currently only supports Azure Key Vault in the same subscription. Selecting an Azure Key Vault under a different subscription will result in a failure.
+> Azure Front Door currently only supports Key Vault in the same subscription. Selecting Key Vault under a different subscription results in a failure.
-> [!NOTE]
-> * Azure Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. Also, your certificate must have a complete certificate chain with leaf and intermediate certificates, and also the root certification authority (CA) must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
-> * We recommend using [**managed identity**](../managed-identity.md) to allow access to your Azure Key Vault certificates because App registration will be retired in the future.
+Other points to note about certificates:
+
+* Azure Front Door doesn't support certificates with elliptic curve cryptography algorithms. Also, your certificate must have a complete certificate chain with leaf and intermediate certificates. The root CA also must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
+* We recommend that you use [managed identity](../managed-identity.md) to allow access to your Key Vault certificates because app registration will be retired in the future.
#### Register Azure Front Door Register the service principal for Azure Front Door as an app in your Microsoft Entra ID by using Azure PowerShell or the Azure CLI. > [!NOTE]
-> * This action requires you to have *Global Administrator* permissions in Microsoft Entra ID. The registration only needs to be performed **once per Microsoft Entra tenant**.
-> * The application ID of **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** and **d4631ece-daab-479b-be77-ccb713491fc0** is predefined by Azure for Front Door Standard and Premium across all Azure tenants and subscriptions. Azure Front Door (Classic) has a different application ID.
+> * This action requires you to have Global Administrator permissions in Microsoft Entra ID. The registration only needs to be performed *once per Microsoft Entra tenant*.
+> * The application IDs of **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** and **d4631ece-daab-479b-be77-ccb713491fc0** are predefined by Azure for Azure Front Door Standard and Premium across all Azure tenants and subscriptions. Azure Front Door (classic) has a different application ID.
# [Azure PowerShell](#tab/powershell) 1. If needed, install [Azure PowerShell](/powershell/azure/install-azure-powershell) in PowerShell on your local machine.
-1. Use PowerShell, run the following command:
+1. Use PowerShell to run the following command:
- **Azure public cloud:**
+ Azure public cloud:
```azurepowershell-interactive New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' ```
- **Azure government cloud:**
+ Azure government cloud:
```azurepowershell-interactive New-AzADServicePrincipal -ApplicationId 'd4631ece-daab-479b-be77-ccb713491fc0'
Register the service principal for Azure Front Door as an app in your Microsoft
# [Azure CLI](#tab/cli)
-1. If needed, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+1. If needed, install the [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
1. Use the Azure CLI to run the following command:
- **Azure public cloud:**
+ Azure public cloud:
```azurecli-interactive az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 ```
- **Azure government cloud:**
+ Azure government cloud:
```azurecli-interactive az ad sp create --id d4631ece-daab-479b-be77-ccb713491fc0 ```
-#### Grant Azure Front Door access to your Key Vault
+#### Grant Azure Front Door access to your key vault
-Grant Azure Front Door permission to access the certificates in your Azure Key Vault account. You only need to give **GET** permission to the certificate and secret in order for Azure Front Door to retrieve the certificate.
+Grant Azure Front Door permission to access the certificates in your Key Vault account. You only need to give `GET` permission to the certificate and secret in order for Azure Front Door to retrieve the certificate.
-1. In your key vault account, select **Access policies**.
+1. In your Key Vault account, select **Access policies**.
1. Select **Add new** or **Create** to create a new access policy.
-1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
+1. In **Secret permissions**, select **Get** to allow Azure Front Door to retrieve the certificate.
-1. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
+1. In **Certificate permissions**, select **Get** to allow Azure Front Door to retrieve the certificate.
-1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and select **Microsoft.AzureFrontDoor-Cdn**. Select **Next**.
+1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** and select **Microsoft.AzureFrontDoor-Cdn**. Select **Next**.
1. In **Application**, select **Next**.
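If you prefer the Azure CLI over the portal for this step, a sketch along these lines grants the same permissions. It assumes the Azure Front Door service principal already exists in your tenant (see the registration step earlier); the vault name is a placeholder.

```azurecli-interactive
# Look up the Azure Front Door service principal and grant it GET on secrets and certificates (placeholder vault name).
frontDoorObjectId=$(az ad sp show --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query id -o tsv)
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id $frontDoorObjectId \
  --secret-permissions get \
  --certificate-permissions get
```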
Azure Front Door can now access this key vault and the certificates it contains.
1. Return to your Azure Front Door Standard/Premium in the portal.
-1. Navigate to **Secrets** under *Settings* and select **+ Add certificate**.
+1. Under **Settings**, go to **Secrets** and select **+ Add certificate**.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate.png" alt-text="Screenshot of Azure Front Door secret landing page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate.png" alt-text="Screenshot that shows the Azure Front Door secret landing pane.":::
-1. On the **Add certificate** page, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium.
+1. On the **Add certificate** pane, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium.
-1. When you select a certificate, you must [select the certificate version](../domain.md#rotate-own-certificate). If you select **Latest**, Azure Front Door will automatically update whenever the certificate is rotated (renewed). Alternatively, you can select a specific certificate version if you prefer to manage certificate rotation yourself.
+1. When you select a certificate, you must [select the certificate version](../domain.md#rotate-own-certificate). If you select **Latest**, Azure Front Door automatically updates whenever the certificate is rotated (renewed). You can also select a specific certificate version if you prefer to manage certificate rotation yourself.
- Leave the version selection as "Latest" and select **Add**.
+ Leave the version selection as **Latest** and select **Add**.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate-page.png" alt-text="Screenshot of add certificate page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate-page.png" alt-text="Screenshot that shows the Add certificate pane.":::
-1. Once the certificate gets provisioned successfully, you can use it when you add a new custom domain.
+1. After the certificate gets provisioned successfully, you can use it when you add a new custom domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/successful-certificate-provisioned.png" alt-text="Screenshot of certificate successfully added to secrets.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/successful-certificate-provisioned.png" alt-text="Screenshot that shows the certificate successfully added to secrets.":::
-1. Navigate to **Domains** under *Setting* and select **+ Add** to add a new custom domain. On the **Add a domain** page, choose
-"Bring Your Own Certificate (BYOC)" for *HTTPS*. For *Secret*, select the certificate you want to use from the drop-down.
+1. Under **Settings**, go to **Domains** and select **+ Add** to add a new custom domain. On the **Add a domain** pane, for **HTTPS**, select **Bring Your Own Certificate (BYOC)**. For **Secret**, select the certificate you want to use from the dropdown list.
> [!NOTE]
- > The common name (CN) of the selected certificate must match the custom domain being added.
+ > The common name of the selected certificate must match the custom domain being added.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-custom-domain-https.png" alt-text="Screenshot of add a custom domain page with HTTPS.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-custom-domain-https.png" alt-text="Screenshot that shows the Add a custom domain pane with HTTPS.":::
-1. Follow the on-screen steps to validate the certificate. Then associate the newly created custom domain to an endpoint as outlined in [creating a custom domain](how-to-add-custom-domain.md) guide.
+1. Follow the onscreen steps to validate the certificate. Then associate the newly created custom domain to an endpoint as outlined in [Configure a custom domain](how-to-add-custom-domain.md).
## Switch between certificate types You can change a domain between using an Azure Front Door-managed certificate and a customer-managed certificate. For more information, see [Domains in Azure Front Door](../domain.md#switch-between-certificate-types).
-1. Select the certificate state to open the **Certificate details** page.
+1. Select the certificate state to open the **Certificate details** pane.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot of certificate state on domains landing page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot that shows the certificate state on the Domains landing pane.":::
-1. On the **Certificate details** page, you can change between *Azure managed* and *Bring Your Own Certificate (BYOC)*.
+1. On the **Certificate details** pane, you can change between **Azure Front Door managed** and **Bring Your Own Certificate (BYOC)**.
- If you select *Bring Your Own Certificate (BYOC)*, follow the steps described above to select a certificate.
+ If you select **Bring Your Own Certificate (BYOC)**, follow the preceding steps to select a certificate.
1. Select **Update** to change the associated certificate with a domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot of certificate details page.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot that shows the Certificate details pane.":::
## Next steps * Learn about [caching with Azure Front Door Standard/Premium](../front-door-caching.md). * [Understand custom domains](../domain.md) on Azure Front Door.
-* Learn about [End-to-end TLS with Azure Front Door](../end-to-end-tls.md).
+* Learn about [end-to-end TLS with Azure Front Door](../end-to-end-tls.md).
governance Definition Structure Policy Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-policy-rule.md
In the `then` block, you define the effect that happens when the `if conditions
For more information about _policyRule_, go to the [policy definition schema](https://schema.management.azure.com/schemas/2020-10-01/policyDefinition.json).
-### Logical operators
+## Logical operators
Supported logical operators are:
hdinsight-aks Azure Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-iot-hub.md
Title: Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS
-description: How to integrate Azure IoT Hub and Apache Flink®
+description: How to integrate Azure IoT Hub and Apache Flink®.
Previously updated : 10/03/2023 Last updated : 04/04/2024 # Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS Azure IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT hub.
-## Prerequisites
-
-1. [Create an Azure IoTHub](/azure/iot-hub/iot-hub-create-through-portal/)
-2. [Create Flink cluster on HDInsight on AKS](./flink-create-cluster-portal.md)
+In this example, the code processes real-time IoT data on Apache Flink® with Azure HDInsight on AKS and sinks it to ADLS Gen2 storage.
-## Configure Flink cluster
+## Prerequisites
-Add ABFS storage account keys in your Flink cluster's configuration.
+* [Create an Azure IoT Hub](/azure/iot-hub/iot-hub-create-through-portal/)
+* [Create a Flink cluster 1.17.0 on HDInsight on AKS](./flink-create-cluster-portal.md)
+* Use MSI to access ADLS Gen2
+* IntelliJ IDEA for development
-Add the following configurations:
+> [!NOTE]
+> For this demonstration, we use a Windows VM as the Maven project development environment in the same virtual network as HDInsight on AKS.
-`fs.azure.account.key.<your storage account's dfs endpoint> = <your storage account's shared access key>`
+## Flink cluster 1.17.0 on HDInsight on AKS
:::image type="content" source="./media/azure-iot-hub/configuration-management.png" alt-text="Diagram showing search bar in Azure portal." lightbox="./media/azure-iot-hub/configuration-management.png":::
-## Writing the Flink job
-
-### Set up configuration for ABFS
-
-```java
-Properties props = new Properties();
-props.put(
- "fs.azure.account.key.<your storage account's dfs endpoint>",
- "<your storage account's shared access key>"
-);
-
-Configuration conf = ConfigurationUtils.createConfiguration(props);
+## Azure IoT Hub in the Azure portal
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
+Within the connection string, you can find a Service Bus URL (the URL of the underlying Event Hubs namespace), which you need to add as a bootstrap server in your Kafka source. In this example, it's `iothub-ns-contosoiot-55642726-4642a54853.servicebus.windows.net:9093`.
-```
+## Prepare messages from an Azure IoT device
-This set up is required for Flink to authenticate with your ABFS storage account to write data to it.
+Each IoT hub comes with built-in system endpoints to handle system and device messages.
-### Defining the IoT Hub source
+For more information, see [How to use VS Code as IoT Hub Device Simulator](https://devblogs.microsoft.com/iotdev/use-vs-code-as-iot-hub-device-simulator-say-hello-to-azure-iot-hub-in-5-minutes/).
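If you have the `azure-iot` extension for the Azure CLI installed, you can also send a few test device-to-cloud messages without VS Code. The hub name, device ID, and payload below are placeholders.

```azurecli-interactive
# Send simulated device-to-cloud messages (requires the azure-iot CLI extension; placeholder values).
az iot device simulate \
  --hub-name <your-iot-hub-name> \
  --device-id <your-device-id> \
  --msg-count 10 \
  --data '{"temperature": 27.5, "humidity": 62.3}'
```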
-IoTHub is build on top of event hub and hence supports a kafka-like API. So in our Flink job, we can define a `KafkaSource` with appropriate parameters to consume messages from IoTHub.
-```java
-String connectionString = "<your iot hub connection string>";
-KafkaSource<String> source = KafkaSource.<String>builder()
- .setBootstrapServers("<your iot hub's service bus url>:9093")
- .setTopics("<name of your iot hub>")
- .setGroupId("$Default")
- .setProperty("partition.discovery.interval.ms", "10000")
- .setProperty("security.protocol", "SASL_SSL")
- .setProperty("sasl.mechanism", "PLAIN")
- .setProperty("sasl.jaas.config", String.format("org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"%s\";", connectionString))
- .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
- .setValueOnlyDeserializer(new SimpleStringSchema())
- .build();
+## Code in Flink
-DataStream<String> kafka = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
-kafka.print();
-```
+`IOTdemo.java`
-The connection string for IoT Hub can be found here -
-
+- KafkaSource:
+IoT Hub is built on top of Event Hubs and hence supports a Kafka-like API. So in our Flink job, we can define a KafkaSource with appropriate parameters to consume messages from IoT Hub.
-Within the connection string, you can find a service bus URL (URL of the underlying event hub namespace), which you need to add as a bootstrap server in your kafka source. In this case, it is: `iothub-ns-sagiri-iot-25146639-20dff4e426.servicebus.windows.net:9093`
+- FileSink:
+Define the ABFS sink.
-### Defining the ABFS sink
-```java
-String outputPath = "abfs://<container name>@<your storage account's dfs endpoint>";
-
-final FileSink<String> sink = FileSink
- .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
- .withRollingPolicy(
- DefaultRollingPolicy.builder()
- .withRolloverInterval(Duration.ofMinutes(2))
- .withInactivityInterval(Duration.ofMinutes(3))
- .withMaxPartSize(MemorySize.ofMebiBytes(5))
- .build())
- .build();
-
-kafka.sinkTo(sink);
```-
-### Flink job code
-
-```java
-package org.example;
-
-import java.time.Duration;
-import java.util.Properties;
+package contoso.example;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.client.program.StreamContextEnvironment;
import org.apache.flink.configuration.MemorySize; import org.apache.flink.connector.file.sink.FileSink;
-import org.apache.flink.core.fs.Path;
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource; import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
-import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy; import org.apache.kafka.clients.consumer.OffsetResetStrategy;
-public class StreamingJob {
- public static void main(String[] args) throws Throwable {
-
- Properties props = new Properties();
- props.put(
- "fs.azure.account.key.<your storage account's dfs endpoint>",
- "<your storage account's shared access key>"
- );
-
- Configuration conf = ConfigurationUtils.createConfiguration(props);
+import java.time.Duration;
+public class IOTdemo {
- StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
+ public static void main(String[] args) throws Exception {
- String connectionString = "<your iot hub connection string>";
+ // create execution environment
+ StreamExecutionEnvironment env = StreamContextEnvironment.getExecutionEnvironment();
-
- KafkaSource<String> source = KafkaSource.<String>builder()
- .setBootstrapServers("<your iot hub's service bus url>:9093")
- .setTopics("<name of your iot hub>")
- .setGroupId("$Default")
- .setProperty("partition.discovery.interval.ms", "10000")
- .setProperty("security.protocol", "SASL_SSL")
- .setProperty("sasl.mechanism", "PLAIN")
- .setProperty("sasl.jaas.config", String.format("org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"%s\";", connectionString))
- .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
- .setValueOnlyDeserializer(new SimpleStringSchema())
- .build();
+ String connectionString = "<your iot hub connection string>";
+ KafkaSource<String> source = KafkaSource.<String>builder()
+ .setBootstrapServers("<your iot hub's service bus url>:9093")
+ .setTopics("<name of your iot hub>")
+ .setGroupId("$Default")
+ .setProperty("partition.discovery.interval.ms", "10000")
+ .setProperty("security.protocol", "SASL_SSL")
+ .setProperty("sasl.mechanism", "PLAIN")
+ .setProperty("sasl.jaas.config", String.format("org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"%s\";", connectionString))
+ .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
+ .setValueOnlyDeserializer(new SimpleStringSchema())
+ .build();
- DataStream<String> kafka = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
- kafka.print();
+ DataStream<String> kafka = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
- String outputPath = "abfs://<container name>@<your storage account's dfs endpoint>";
+ String outputPath = "abfs://<container>@<account_name>.dfs.core.windows.net/flink/data/azureiothubmessage/";
- final FileSink<String> sink = FileSink
- .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
- .withRollingPolicy(
- DefaultRollingPolicy.builder()
- .withRolloverInterval(Duration.ofMinutes(2))
- .withInactivityInterval(Duration.ofMinutes(3))
- .withMaxPartSize(MemorySize.ofMebiBytes(5))
- .build())
- .build();
+ final FileSink<String> sink = FileSink
+ .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
+ .withRollingPolicy(
+ DefaultRollingPolicy.builder()
+ .withRolloverInterval(Duration.ofMinutes(2))
+ .withInactivityInterval(Duration.ofMinutes(3))
+ .withMaxPartSize(MemorySize.ofMebiBytes(5))
+ .build())
+ .build();
- kafka.sinkTo(sink);
+ kafka.sinkTo(sink);
- env.execute("Azure-IoTHub-Flink-ABFS");
- }
+ env.execute("Sink Azure IOT hub to ADLS gen2");
+ }
}- ```
-#### Maven dependencies
+**Maven pom.xml**
```xml
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-java</artifactId>
- <version>${flink.version}</version>
-</dependency>
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-streaming-java</artifactId>
- <version>${flink.version}</version>
-</dependency>
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-streaming-scala_2.12</artifactId>
- <version>${flink.version}</version>
-</dependency>
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-clients</artifactId>
- <version>${flink.version}</version>
-</dependency>
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-connector-kafka</artifactId>
- <version>${flink.version}</version>
-</dependency>
-<dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-connector-files</artifactId>
- <version>${flink.version}</version>
-</dependency>
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <groupId>contoso.example</groupId>
+ <artifactId>FlinkIOTDemo</artifactId>
+ <version>1.0-SNAPSHOT</version>
+ <properties>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ <flink.version>1.17.0</flink.version>
+ <java.version>1.8</java.version>
+ <scala.binary.version>2.12</scala.binary.version>
+ </properties>
+ <dependencies>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-java</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-streaming-java</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-clients</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-files -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-connector-files</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-connector-kafka</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ </dependencies>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-assembly-plugin</artifactId>
+ <version>3.0.0</version>
+ <configuration>
+ <appendAssemblyId>false</appendAssemblyId>
+ <descriptorRefs>
+ <descriptorRef>jar-with-dependencies</descriptorRef>
+ </descriptorRefs>
+ </configuration>
+ <executions>
+ <execution>
+ <id>make-assembly</id>
+ <phase>package</phase>
+ <goals>
+ <goal>single</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ </plugins>
+ </build>
+</project>
```
+## Package the jar and submit the job to the Flink cluster
+
+Upload the jar to the webssh pod, and then submit the jar.
+
+```
+user@sshnode-0 [ ~ ]$ bin/flink run -c contoso.example.IOTdemo -j FlinkIOTDemo-1.0-SNAPSHOT.jar
+SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
+SLF4J: Defaulting to no-operation (NOP) logger implementation
+SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
+Job has been submitted with JobID de1931b1c1179e7530510b07b7ced858
+```
+## Check the job on the Flink Dashboard UI
-### Submit job
-Submit job using HDInsight on AKS's [Flink job submission API](./flink-job-management.md)
+## Check the result in ADLS Gen2 on the Azure portal
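You can also list the output files from the command line instead of the portal. The account, container, and path below mirror the `outputPath` used in the job and are placeholders.

```azurecli-interactive
# List the files the Flink job wrote to ADLS Gen2 (placeholder values).
az storage fs file list \
  --account-name <account_name> \
  --file-system <container> \
  --path flink/data/azureiothubmessage \
  --auth-mode login \
  --output table
```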
### Reference - [Apache Flink Website](https://flink.apache.org/)-- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Prerequisites Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/prerequisites-resources.md
Title: Resource prerequisites for Azure HDInsight on AKS
description: Prerequisite steps to complete for Azure resources before working with HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 04/08/2024 # Resource prerequisites
For example, if you provide resource prefix as "demo" then, following resour
|Trino|**Create the resources mentioned as follows:** <br> 1. Managed Service Identity (MSI): user-assigned managed identity. <br><br> [![Deploy Trino to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-aks%2Fmain%2FARM%2520templates%2FprerequisitesTrino.json)| |Flink |**Create the resources mentioned as follows:** <br> 1. Managed Service Identity (MSI): user-assigned managed identity. <br> 2. ADLS Gen2 storage account and a container. <br><br> **Role assignments:** <br> 1. Assigns ΓÇ£Storage Blob Data OwnerΓÇ¥ role to user-assigned MSI on storage account. <br><br> [![Deploy Apache Flink to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-aks%2Fmain%2FARM%2520templates%2FprerequisitesFlink.json)| |Spark| **Create the resources mentioned as follows:** <br> 1. Managed Service Identity (MSI): user-assigned managed identity. <br> 2. ADLS Gen2 storage account and a container. <br><br> **Role assignments:** <br> 1. Assigns ΓÇ£Storage Blob Data OwnerΓÇ¥ role to user-assigned MSI on storage account. <br><br> [![Deploy Spark to Azure](https://aka.ms/deploytoazurebutton)]( https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-aks%2Fmain%2FARM%2520templates%2FprerequisitesSpark.json)|
-|Trino, Flink, or Spark with Hive Metastore (HMS)|**Create the resources mentioned as follows:** <br> 1. Managed Service Identity (MSI): user-assigned managed identity. <br> 2. ADLS Gen2 storage account and a container. <br> 3. Azure Key Vault and a secret to store SQL Server admin credentials. <br><br> **Role assignments:** <br> 1. Assigns ΓÇ£Storage Blob Data OwnerΓÇ¥ role to user-assigned MSI on storage account. <br> 2. Assigns ΓÇ£Key Vault Secrets UserΓÇ¥ role to user-assigned MSI on Key Vault. <br><br> [![Deploy Trino HMS to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-aks%2Fmain%2FARM%2520templates%2Fprerequisites_WithHMS.json)|
+|Trino, Flink, or Spark with Hive Metastore (HMS)|**Create the resources mentioned as follows:** <br> 1. Managed Service Identity (MSI): user-assigned managed identity. <br> 2. ADLS Gen2 storage account and a container. <br> 3. Azure SQL Server and SQL Database. <br> 4. Azure Key Vault and a secret to store SQL Server admin credentials. <br><br> **Role assignments:** <br> 1. Assigns ΓÇ£Storage Blob Data OwnerΓÇ¥ role to user-assigned MSI on storage account. <br> 2. Assigns ΓÇ£Key Vault Secrets UserΓÇ¥ role to user-assigned MSI on Key Vault. <br><br> [![Deploy Trino HMS to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-aks%2Fmain%2FARM%2520templates%2Fprerequisites_WithHMS.json)|
> [!NOTE] > Using these ARM templates require a user to have permission to create new resources and assign roles to the resources in the subscription.
For example, if you provide resource prefix as "demo" then, following resour
#### [Create user-assigned managed identity (MSI)](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity)
- A managed identity is an identity registered in Microsoft Entra ID [(Microsoft Entra ID)](https://www.microsoft.com/security/business/identity-access/azure-active-directory) whose credentials managed by Azure. With managed identities, you need not register service principals in Microsoft Entra ID to maintain credentials such as certificates.
+ A managed identity is an identity registered in [Microsoft Entra ID](https://www.microsoft.com/security/business/identity-access/azure-active-directory) whose credentials are managed by Azure. With managed identities, you don't need to register service principals in Microsoft Entra ID to maintain credentials such as certificates.
HDInsight on AKS relies on user-assigned MSI for communication among different components.
hdinsight Apache Esp Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-esp-kafka-ssl-encryption-authentication.md
Title: Apache Kafka TLS encryption & authentication for ESP Kafka Clusters - Azure HDInsight
-description: Set up TLS encryption for communication between Kafka clients and Kafka brokers, Set up SSL authentication of clients for ESP Kafka clusters
+description: Set up TLS encryption for communication between Kafka clients and Kafka brokers, and set up SSL authentication of clients for ESP Kafka clusters.
Previously updated : 04/03/2023 Last updated : 04/08/2024 # Set up TLS encryption and authentication for ESP Apache Kafka cluster in Azure HDInsight
The summary of the broker setup process is as follows:
1. Once you have all of the certificates, put the certs into the cert store. 1. Go to Ambari and change the configurations.
-Use the following detailed instructions to complete the broker setup:
+ Use the following detailed instructions to complete the broker setup:
-> [!Important]
-> In the following code snippets wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1` or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
+ > [!Important]
+ > In the following code snippets wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1` or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
1. Perform initial setup on head node 0, which for HDInsight fills the role of the Certificate Authority (CA).
Use the following detailed instructions to complete the broker setup:
1. SCP the certificate signing request to the CA (headnode0) ```bash
- keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -storetype pkcs12
+ keytool -genkey -keystore kafka.server.keystore.jks -keyalg RSA -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -ext SAN=DNS:FQDN_WORKER_NODE -storetype pkcs12
keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123" scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request ```
To complete the configuration modification, do the following steps:
1. Under **Kafka Broker** set the **listeners** property to `PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093` 1. Under **Advanced kafka-broker** set the **security.inter.broker.protocol** property to `SASL_SSL`
- :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-with-sasl.png" alt-text="Screenshot showing how to edit Kafka sasl configuration properties in Ambari." border="true":::
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-with-sasl.png" alt-text="Screenshot showing how to edit Kafka configuration properties in Ambari." border="true":::
1. Under **Custom kafka-broker** set the **ssl.client.auth** property to `required`.
To complete the configuration modification, do the following steps:
> 1. ssl.keystore.location and ssl.truststore.location is the complete path of your keystore, truststore location in Certificate Authority (hn0) > 1. ssl.keystore.password and ssl.truststore.password is the password set for the keystore and truststore. In this case as an example,` MyServerPassword123` > 1. ssl.key.password is the key set for the keystore and trust store. In this case as an example, `MyServerPassword123`
-
- For HDI version 4.0 or 5.0
-
- a. If you're setting up authentication and encryption, then the screenshot looks like
- :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-required.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as required." border="true":::
-
- b. If you are setting up encryption only, then the screenshot looks like
+1. To use TLS 1.3 in Kafka, add the following configs to the Kafka configs in Ambari.
+ 1. `ssl.enabled.protocols=TLSv1.3`
+ 1. `ssl.protocol=TLSv1.3`
+
+ > [!Important]
+ > 1. TLS 1.3 works with the HDI 5.1 Kafka version only.
+ > 1. If you use TLS 1.3 on the server side, you should use the TLS 1.3 configs on the client side too.
+
+1. For HDI version 4.0 or 5.0
+ 1. If you're setting up authentication and encryption, then the screenshot looks like
+
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-required.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as required." border="true":::
+
+ 1. If you are setting up encryption only, then the screenshot looks like
- :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-none.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as none." border="true":::
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-none.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as none." border="true":::
1. Restart all Kafka brokers.
These steps are detailed in the following code snippets.
ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks ssl.truststore.password=MyClientPassword123 ```
+ 1. To use TLS 1.3, add the following configs to the file `client-ssl-auth.properties`:
+ ```config
+ ssl.enabled.protocols=TLSv1.3
+ ssl.protocol=TLSv1.3
+ ```
1. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093. Refer to [Verification](apache-kafka-ssl-encryption-authentication.md#verification) section for steps needed to verify the setup using console producer/consumer.
The details of each step are given.
cd ssl ```
-1. Create client store with signed cert, and import CA certificate into the keystore and truststore on client machine (hn1):
+1. Create the client store with the signed certificate, and import the CA certificate into the keystore and truststore on the client machine (hn1):
```bash keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
The details of each step are given.
ssl.key.password=MyClientPassword123 ```
+ 1. To use TLS 1.3, add the following configs to the file `client-ssl-auth.properties`:
+ ```config
+ ssl.enabled.protocols=TLSv1.3
+ ssl.protocol=TLSv1.3
+ ```
## Verification
Run these steps on the client machine.
### Kafka 2.1 or above > [!Note]
-> Below commands will work if you are either using `kafka` user or a custom user which have access to do CRUD operation.
+> The following commands work if you're using either the `kafka` user or a custom user that has access to perform CRUD operations.
:::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/access-to-crud-operation.png" alt-text="Screenshot showing how to provide access CRUD operations." border="true":::
Using Command Line Tool
1. `klist`
- If ticket is present, then you are good to proceed. Otherwise generate a Kerberos principle and keytab using below command.
+ If a ticket is present, then you're good to proceed. Otherwise, generate a Kerberos principal and keytab using the following command.
1. `ktutil`
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 02/20/2024 Last updated : 04/08/2024
-# Set up TLS encryption and authentication for Non ESP Apache Kafka cluster in Azure HDInsight
+# Set up TLS encryption and authentication for Non-ESP Apache Kafka cluster in Azure HDInsight
This article shows you how to set up Transport Layer Security (TLS) encryption, previously known as Secure Sockets Layer (SSL) encryption, between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way TLS).
The summary of the broker setup process is as follows:
1. Once you have all of the certificates, put the certs into the cert store. 1. Go to Ambari and change the configurations.
-Use the following detailed instructions to complete the broker setup:
-
-> [!Important]
-> In the following code snippets wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1` or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
+ Use the following detailed instructions to complete the broker setup:
+ > [!Important]
+ > In the following code snippets wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1` or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
+
1. Perform initial setup on head node 0, which for HDInsight fills the role of the Certificate Authority (CA). ```bash
Use the following detailed instructions to complete the broker setup:
1. SCP the certificate signing request to the CA (headnode0) ```bash
- keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -storetype pkcs12
+ keytool -genkey -keystore kafka.server.keystore.jks -keyalg RSA -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -ext SAN=DNS:FQDN_WORKER_NODE -storetype pkcs12
keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123" scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request ```
To complete the configuration modification, do the following steps:
> 1. ssl.keystore.password and ssl.truststore.password is the password set for the keystore and truststore. In this case as an example, `MyServerPassword123` > 1. ssl.key.password is the key set for the keystore and trust store. In this case as an example, `MyServerPassword123`
+1. To use TLS 1.3 in Kafka
+
+ Add the following configs to the Kafka configs in Ambari:
+ > 1. `ssl.enabled.protocols=TLSv1.3`
+ > 1. `ssl.protocol=TLSv1.3`
+ >
+ > [!Important]
> 1. TLS 1.3 works with the HDI 5.1 Kafka version only.
> 1. If you use TLS 1.3 on the server side, you should use the TLS 1.3 configs on the client side too.
- For HDI version 4.0 or 5.0
+1. For HDI version 4.0 or 5.0
1. If you're setting up authentication and encryption, then the screenshot looks like
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four.png" alt-text="Editing kafka-env template property in Ambari four." border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four.png" alt-text="Editing kafka-env template property in Ambari four." border="true":::
- 1. If you are setting up encryption only, then the screenshot looks like
+ 1. If you're setting up encryption only, then the screenshot looks like
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four-encryption-only.png" alt-text="Screenshot showing how to edit kafka-env template property field in Ambari for encryption only." border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four-encryption-only.png" alt-text="Screenshot showing how to edit kafka-env template property field in Ambari for encryption only." border="true":::
- 1. Restart all Kafka brokers. + ## Client setup (without authentication) If you don't need authentication, the summary of the steps to set up only TLS encryption are:
These steps are detailed in the following code snippets.
ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks ssl.truststore.password=MyClientPassword123 ```
+ 1. To use TLS 1.3, add the following configs to the file `client-ssl-auth.properties`:
+ ```config
+ ssl.enabled.protocols=TLSv1.3
+ ssl.protocol=TLSv1.3
+ ```
1. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093. Refer to [Verification](apache-kafka-ssl-encryption-authentication.md#verification) section for steps needed to verify the setup using console producer/consumer. + ## Client setup (with authentication) > [!Note]
The details of each step are given.
cd ssl ```
-1. Create client store with signed cert, and import ca cert into the keystore and truststore on client machine (hn1):
+1. Create the client store with the signed cert, and import the CA cert into the keystore and truststore on the client machine (hn1):
```bash keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
The details of each step are given.
ssl.keystore.password=MyClientPassword123 ssl.key.password=MyClientPassword123 ```
+ 1. To use TLS 1.3, add the following configs to the file `client-ssl-auth.properties`:
+ ```config
+ ssl.enabled.protocols=TLSv1.3
+ ssl.protocol=TLSv1.3
+ ```
## Verification
hdinsight Apache Spark Machine Learning Mllib Ipython https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-machine-learning-mllib-ipython.md
description: Learn how to use Spark MLlib to create a machine learning app that
Previously updated : 06/23/2023 Last updated : 04/08/2024 # Use Apache Spark MLlib to build a machine learning application and analyze a dataset
-Learn how to use Apache Spark MLlib to create a machine learning application. The application will do predictive analysis on an open dataset. From Spark's built-in machine learning libraries, this example uses *classification* through logistic regression.
+Learn how to use Apache Spark MLlib to create a machine learning application. The application does predictive analysis on an open dataset. From Spark's built-in machine learning libraries, this example uses *classification* through logistic regression.
MLlib is a core Spark library that provides many utilities useful for machine learning tasks, such as:
Logistic regression is the algorithm that you use for classification. Spark's lo
In summary, the process of logistic regression produces a *logistic function*. Use the function to predict the probability that an input vector belongs in one group or the other.
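For reference (this is the standard logistic regression formulation, not something specific to this article's code), the logistic function maps a weighted sum of the input features to a probability between 0 and 1:

$$
P(y = 1 \mid \mathbf{x}) = \frac{1}{1 + e^{-(\beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x})}}
$$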
-## Predictive analysis example on food inspection data
+## Predictive analysis example of food inspection data
In this example, you use Spark to do some predictive analysis on food inspection data (**Food_Inspections1.csv**). The data was acquired through the [City of Chicago data portal](https://data.cityofchicago.org/). This dataset contains information about food establishment inspections that were conducted in Chicago, including information about each establishment, the violations found (if any), and the results of the inspection. The CSV data file is already available in the storage account associated with the cluster at **/HdiSamples/HdiSamples/FoodInspectionData/Food_Inspections1.csv**.
-In the steps below, you develop a model to see what it takes to pass or fail a food inspection.
+In the following steps, you develop a model to see what it takes to pass or fail a food inspection.
## Create an Apache Spark MLlib machine learning app
Use the Spark context to pull the raw CSV data into memory as unstructured text.
```PySpark def csvParse(s): import csv
- from StringIO import StringIO
+ from io import StringIO
sio = StringIO(s)
- value = csv.reader(sio).next()
+ value = next(csv.reader(sio))
sio.close() return value
Let's start to get a sense of what the dataset contains.
## Create a logistic regression model from the input dataframe
-The final task is to convert the labeled data. Convert the data into a format that can be analyzed by logistic regression. The input to a logistic regression algorithm needs a set of *label-feature vector pairs*. Where the "feature vector" is a vector of numbers that represent the input point. So, you need to convert the "violations" column, which is semi-structured and contains many comments in free-text. Convert the column to an array of real numbers that a machine could easily understand.
+The final task is to convert the labeled data into a format that can be analyzed by logistic regression. The input to a logistic regression algorithm needs a set of *label-feature vector pairs*, where the "feature vector" is a vector of numbers that represents the input point. So, you need to convert the "violations" column, which is semi-structured and contains many comments in free text, into an array of real numbers that a machine could easily understand.
-One standard machine learning approach for processing natural language is to assign each distinct word an "index". Then pass a vector to the machine learning algorithm. Such that each index's value contains the relative frequency of that word in the text string.
+One standard machine learning approach for processing natural language is to assign each distinct word an index, and then pass a vector to the machine learning algorithm in which each index's value contains the relative frequency of that word in the text string.
-MLlib provides an easy way to do this operation. First, "tokenize" each violations string to get the individual words in each string. Then, use a `HashingTF` to convert each set of tokens into a feature vector that can then be passed to the logistic regression algorithm to construct a model. You conduct all of these steps in sequence using a "pipeline".
+MLlib provides an easy way to do this operation. First, "tokenize" each violations string to get the individual words in each string. Then, use a `HashingTF` to convert each set of tokens into a feature vector that can then be passed to the logistic regression algorithm to construct a model. You conduct all of these steps in sequence using a pipeline.
```PySpark tokenizer = Tokenizer(inputCol="violations", outputCol="words")
model = pipeline.fit(labeledData)
## Evaluate the model using another dataset
-You can use the model you created earlier to *predict* what the results of new inspections will be. The predictions are based on the violations that were observed. You trained this model on the dataset **Food_Inspections1.csv**. You can use a second dataset, **Food_Inspections2.csv**, to *evaluate* the strength of this model on the new data. This second data set (**Food_Inspections2.csv**) is in the default storage container associated with the cluster.
+You can use the model you created earlier to *predict* what the results of new inspections are. The predictions are based on the violations that were observed. You trained this model on the dataset **Food_Inspections1.csv**. You can use a second dataset, **Food_Inspections2.csv**, to *evaluate* the strength of this model on the new data. This second data set (**Food_Inspections2.csv**) is in the default storage container associated with the cluster.
1. Run the following code to create a new dataframe, **predictionsDf** that contains the prediction generated by the model. The snippet also creates a temporary table called **Predictions** based on the dataframe.
You can use the model you created earlier to *predict* what the results of new i
results = 'Pass w/ Conditions'))""").count() numInspections = predictionsDf.count()
- print "There were", numInspections, "inspections and there were", numSuccesses, "successful predictions"
- print "This is a", str((float(numSuccesses) / float(numInspections)) * 100) + "%", "success rate"
+ print("There were", numInspections, "inspections and there were", numSuccesses, "successful predictions")
+ print("This is a", str((float(numSuccesses) / float(numInspections)) * 100) + "%", "success rate")
``` The output looks like the following text:
You can now construct a final visualization to help you reason about the results
## Shut down the notebook
-After you have finished running the application, you should shut down the notebook to release the resources. To do so, from the **File** menu on the notebook, select **Close and Halt**. This action shuts down and closes the notebook.
+After running the application, you should shut down the notebook to release the resources. To do so, from the **File** menu on the notebook, select **Close and Halt**. This action shuts down and closes the notebook.
## Next steps
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
If the device gets any of the following errors when it connects, it should use a
To learn more about device error codes, see [Troubleshooting device connections](troubleshooting.md).
-To learn more about implementing automatic reconnections, see [Manage device reconnections to create resilient applications](../../iot-develop/concepts-manage-device-reconnections.md).
+To learn more about implementing automatic reconnections, see [Manage device reconnections to create resilient applications](../../iot/concepts-manage-device-reconnections.md).
### Test failover capabilities
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Title: Create an IoT Central application
-description: How to create an IoT Central application by using the Azure IoT Central site, the Azure portal, or a command-line environment.
+description: How to create an IoT Central application by using the Azure portal or a command-line environment.
Previously updated : 07/14/2023 Last updated : 04/03/2024 # Create an IoT Central application
-You have several ways to create an IoT Central application. You can use one of the GUI-based methods if you prefer a manual approach, or one of the CLI or programmatic methods if you want to automate the process.
+There are multiple ways to create an IoT Central application. You can use a GUI-based method if you prefer a manual approach, or one of the CLI or programmatic methods if you need to automate the process.
Whichever approach you choose, the configuration options are the same, and the process typically takes less than a minute to complete. [!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
-To learn how to manage IoT Central application by using the IoT Central REST API, see [Use the REST API to create and manage IoT Central applications.](../core/howto-manage-iot-central-with-rest-api.md)
+Other approaches, not described in this article, include:
-## Options
+- [Use the REST API to create and manage IoT Central applications](../core/howto-manage-iot-central-with-rest-api.md).
+- [Create and manage an Azure IoT Central application from the Microsoft Cloud Solution Provider portal](howto-create-and-manage-applications-csp.md).
-This section describes the available options when you create an IoT Central application. Depending on the method you choose, you might need to supply the options on a form or as command-line parameters:
+## Parameters
-### Pricing plans
+This section describes the available parameters when you create an IoT Central application. Depending on the method you choose to create your application, you might need to supply the parameter values on a web form or at the command-line. In some cases, there are default values that you can use:
-The *standard* plans:
+### Pricing plan
+
+The _standard_ plans:
-- You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md). - Let you create and manage IoT Central applications using any of the available methods. - Let you connect as many devices as you need. You're billed by device. To learn more, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). - Can be upgraded or downgraded to other standard plans.
The _subdomain_ you choose uniquely identifies your application. The subdomain i
### Application template ID
-The application template you choose determines the initial contents of your application, such as dashboards and device templates. The template ID For a custom application, use `iotc-pnp-preview` as the template ID.
+The application template you choose determines the initial contents of your application, such as dashboards and device templates. For a custom application, use `iotc-pnp-preview` as the template ID.
+
+The following table lists the available application templates:
+ ### Billing information
If you choose one of the standard plans, you need to provide billing information
- The Azure subscription you're using. - The directory that contains the subscription you're using.-- The location to host your application. IoT Central uses Azure regions as locations: Australia East, Canada Central, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US.
-## Azure portal
+### Location
-The easiest way to get started creating IoT Central applications is in the [Azure portal](https://portal.azure.com/#create/Microsoft.IoTCentral).
+The location to host your application. IoT Central uses Azure regions as locations. Currently, you can choose from: Australia East, Canada Central, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US.
+### Resource group
-Enter the following information:
+Some methods require you to specify a resource group in the Azure subscription where the application is created. You can create a new resource group or use an existing one.
-| Field | Description |
-| -- | -- |
-| Subscription | The Azure subscription you want to use. |
-| Resource group | The resource group you want to use. You can create a new resource group or use an existing one. |
-| Resource name | A valid Azure resource name. |
-| Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. |
-| Template | The application template you want to use. For a blank application template, select **Custom application**.|
-| Region | The Azure region you want to use. |
-| Pricing plan | The pricing plan you want to use. |
+## Create an application
+
+# [Azure portal](#tab/azure-portal)
+
+The easiest way to get started creating IoT Central applications is in the [Azure portal](https://portal.azure.com/#create/Microsoft.IoTCentral).
:::image type="content" source="media/howto-create-iot-central-application/create-app-portal.png" alt-text="Screenshot that shows the create application experience in the Azure portal.":::
When the app is ready, you can navigate to it from the Azure portal:
:::image type="content" source="media/howto-create-iot-central-application/view-app-portal.png" alt-text="Screenshot that shows the IoT Central application resource in the Azure portal. The application URL is highlighted.":::
-To list all the IoT Central apps you've created, navigate to [IoT Central Applications](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.IoTCentral%2FIoTApps).
+To list all the IoT Central apps in your subscription, navigate to [IoT Central Applications](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.IoTCentral%2FIoTApps).
+
+# [Azure CLI](#tab/azure-cli)
+
+If you haven't already installed the extension, run the following command to install it:
+
+```azurecli
+az extension add --name azure-iot
+```
+
+Use the [az iot central app create](/cli/azure/iot/central/app#az-iot-central-app-create) command to create an IoT Central application in your Azure subscription. For example, to create a custom application in the _MyIoTCentralResourceGroup_ resource group:
+
+```azurecli
+# Create a resource group for the IoT Central application
+az group create --location "East US" \
+ --name "MyIoTCentralResourceGroup"
+
+# Create an IoT Central application
+az iot central app create \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --name "myiotcentralapp" --subdomain "mysubdomain" \
+ --sku ST1 --template "iotc-pnp-preview" \
+ --display-name "My Custom Display Name"
+```
+
+To list all the IoT Central apps in your subscription, run the following command:
+
+```azurecli
+az iot central app list
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+If you haven't already installed the PowerShell module, run the following command to install it:
+
+```powershell
+Install-Module Az.IotCentral
+```
+
+Use the [New-AzIotCentralApp](/powershell/module/az.iotcentral/New-AzIotCentralApp) cmdlet to create an IoT Central application in your Azure subscription. For example, to create a custom application in the _MyIoTCentralResourceGroup_ resource group:
+
+```powershell
+# Create a resource group for the IoT Central application
+New-AzResourceGroup -Location "East US" `
+ -Name "MyIoTCentralResourceGroup"
+
+# Create an IoT Central application
+New-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Name "myiotcentralapp" -Subdomain "mysubdomain" `
+ -Sku "ST1" -Template "iotc-pnp-preview" `
+ -DisplayName "My Custom Display Name"
+```
+
+To list all the IoT Central apps in your subscription, run the following command:
+
+```powershell
+Get-AzIotCentralApp
+```
++ To list all the IoT Central applications you have access to, navigate to [IoT Central Applications](https://apps.azureiotcentral.com/myapps). ## Copy an application
-You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for.
+You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you're billed for:
-Navigate to **Application > Management** and select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Options](#options).
+1. Sign in to the application you want to copy.
+1. Navigate to **Application > Management** and select **Copy**.
+1. In the dialog box, enter the details for the new application.
+1. Select **Copy** to confirm that you want to continue.
:::image type="content" source="media/howto-create-iot-central-application/app-copy.png" alt-text="Screenshot that shows the copy application settings page." lightbox="media/howto-create-iot-central-application/app-copy.png"::: After the application copy operation succeeds, you can navigate to the new application using the link.
-Copying an application also copies the definition of rules and email action. Some actions, such as Flow and Logic Apps, are tied to specific rules by the rule ID. When a rule is copied to a different application, it gets its own rule ID. In this case, users must create a new action and then associate the new rule with it. In general, it's a good idea to check the rules and actions to make sure they're up-to-date in the new application.
+Be aware of the following issues in the new application:
-> [!WARNING]
-> If a dashboard includes tiles that display information about specific devices, then those tiles show **The requested resource was not found** in the new application. You must reconfigure these tiles to display information about devices in your new application.
+- Copying an application also copies the definition of rules and email actions. Some actions, such as _Flow and Logic Apps_, are tied to specific rules by the rule ID. When a rule is copied to a different application, it gets its own rule ID. In this case, users must create a new action and then associate the new rule with it. In general, it's a good idea to check the rules and actions to make sure they're up-to-date in the new application.
+
+- If a dashboard includes tiles that display information about specific devices, then those tiles show **The requested resource was not found** in the new application. You must reconfigure these tiles to display information about devices in your new application.
## Create and use a custom application template When you create an Azure IoT Central application, you choose from the built-in sample templates. You can also create your own application templates from existing IoT Central applications. You can then use your own application templates when you create new applications.
+### What's in your application template?
+ When you create an application template, it includes the following items from your existing application: -- The default application dashboard, including the dashboard layout and all the tiles you've defined.-- Device templates, including measurements, settings, properties, commands, and dashboard.-- Rules. All rule definitions are included. However actions, except for email actions, aren't included.
+- The default application dashboard, including the dashboard layout and all the tiles you defined.
+- Device templates, including measurements, settings, properties, commands, and views.
+- All rule definitions are included. However, actions other than email actions aren't included.
- Device groups, including their queries. > [!WARNING]
When you create an application template, it doesn't include the following items:
Add these items manually to any applications created from an application template.
+### Create an application template
+ To create an application template from an existing IoT Central application:
-1. Go to the **Application** section in your application.
+1. Navigate to the **Application** section in your application.
1. Select **Template Export**. 1. On the **Template Export** page, enter a name and description for your template. 1. Select the **Export** button to create the application template. You can now copy the **Shareable Link** that enables someone to create a new application from the template:
If you delete an application template, you can no longer use the previously gene
To update your application template, change the template name or description on the **Application Template Export** page. Then select the **Export** button again. This action generates a new **Shareable link** and invalidates any previous **Shareable link** URL.
-## Other approaches
-
-You can also use the following approaches to create an IoT Central application:
--- [Create an IoT Central application using the command line](howto-manage-iot-central-from-cli.md#create-an-application)-- [Create an IoT Central application programmatically](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/)-
-## Next steps
+## Next step
-Now that you've learned how to manage Azure IoT Central applications from Azure CLI, here's the suggested next step:
+Now that you've learned how to create Azure IoT Central applications, here's the suggested next step:
> [!div class="nextstepaction"]
-> [Administer your application](howto-administer.md)
+> [Manage and monitor IoT Central applications](howto-manage-and-monitor-iot-central.md)
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
When your pipeline job completes successfully, sign in to your production IoT Ce
Now that you have a working pipeline you can manage your IoT Central instances directly by using configuration changes. You can upload new device templates into the *Device Models* folder and make changes directly to the configuration file. This approach lets you treat your IoT Central application's configuration the same as any other code.
-## Next steps
+## Next step
-Now that you know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
+Now that you know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central applications](howto-manage-and-monitor-iot-central.md).
iot-central Howto Manage And Monitor Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-and-monitor-iot-central.md
+
+ Title: Manage and monitor IoT Central
+description: This article describes how to create, manage, and monitor your IoT Central applications and enable managed identities.
++++ Last updated : 04/02/2024++
+#customer intent: As an administrator, I want to learn how to manage and monitor IoT Central applications using Azure portal, Azure CLI, and Azure PowerShell so that I can maintain my set of IoT Central applications.
+++
+# Manage and monitor IoT Central applications
+
+You can use the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/), or [Azure PowerShell](/powershell/azure/) to manage and monitor IoT Central applications.
+
+If you prefer to use a language such as JavaScript, Python, C#, Ruby, or Go to create, update, list, and delete Azure IoT Central applications, see the [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository.
+
+To learn how to create an IoT Central application, see [Create an IoT Central application](howto-create-iot-central-application.md).
+
+## View applications
+
+# [Azure portal](#tab/azure-portal)
+
+To list all the IoT Central apps in your subscription, navigate to [IoT Central applications](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.IoTCentral%2FIoTApps).
+
+# [Azure CLI](#tab/azure-cli)
+
+Use the [az iot central app list](/cli/azure/iot/central/app#az-iot-central-app-list) command to list your IoT Central applications and view metadata.
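
For example, a minimal invocation (the table output format is optional):

```azurecli
# List the IoT Central applications in the current subscription
az iot central app list --output table
```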
+
+# [PowerShell](#tab/azure-powershell)
+
+Use the [Get-AzIotCentralApp](/powershell/module/az.iotcentral/Get-AzIotCentralApp) cmdlet to list your IoT Central applications and view metadata.
+++
+## Delete an application
+
+# [Azure portal](#tab/azure-portal)
+
+To delete an IoT Central application in the Azure portal, navigate to the **Overview** page of the application in the portal and select **Delete**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Use the [az iot central app delete](/cli/azure/iot/central/app#az-iot-central-app-delete) command to delete an IoT Central application.
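
For example, a minimal sketch that reuses the application and resource group names from the earlier create example:

```azurecli
# Delete an IoT Central application
az iot central app delete \
  --name "myiotcentralapp" \
  --resource-group "MyIoTCentralResourceGroup"
```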
+
+# [PowerShell](#tab/azure-powershell)
+
+Use the [Remove-AzIotCentralApp](/powershell/module/az.iotcentral/remove-aziotcentralapp) cmdlet to delete an IoT Central application.
+++
+## Manage networking
+
+You can use private IP addresses from a virtual network address space when you manage your devices in your IoT Central application to eliminate exposure on the public internet. To learn more, see [Create and configure a private endpoint for IoT Central](../core/howto-create-private-endpoint.md).
+
+## Configure a managed identity
+
+When you configure a data export in your IoT Central application, you can choose to configure the connection to the destination with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). Managed identities are more secure because:
+
+* You don't store the credentials for your resource in a connection string in your IoT Central application.
+* The credentials are automatically tied to the lifetime of your IoT Central application.
+* Managed identities automatically rotate their security keys regularly.
+
+IoT Central currently uses [system-assigned managed identities](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). To create the managed identity for your application, you use either the Azure portal or the REST API.
+
+When you configure a managed identity, the configuration includes a *scope* and a *role*:
+
+* The scope defines where you can use the managed identity. For example, you can use an Azure resource group as the scope. In this case, both the IoT Central application and the destination must be in the same resource group.
+* The role defines what permissions the IoT Central application is granted in the destination service. For example, for an IoT Central application to send data to an event hub, the managed identity needs the **Azure Event Hubs Data Sender** role assignment.
+
+# [Azure portal](#tab/azure-portal)
++
+# [Azure CLI](#tab/azure-cli)
+
+You can enable the managed identity when you create an IoT Central application:
+
+```azurecli
+# Create an IoT Central application with a managed identity
+az iot central app create \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --name "myiotcentralapp" --subdomain "mysubdomain" \
+ --sku ST1 --template "iotc-pnp-preview" \
+ --display-name "My Custom Display Name" \
+ --mi-system-assigned
+```
+
+Alternatively, you can enable a managed identity on an existing IoT Central application:
+
+```azurecli
+# Enable a system-assigned managed identity
+az iot central app identity assign --name "myiotcentralapp" \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --system-assigned
+```
+
+After you enable the managed identity, you can use the CLI to configure the role assignments.
+
+Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to create a role assignment. For example, the following commands first retrieve the principal ID of the managed identity. The second command assigns the `Azure Event Hubs Data Sender` role to the principal ID in the scope of the `MyIoTCentralResourceGroup` resource group:
+
+```azurecli
+scope=$(az group show -n "MyIoTCentralResourceGroup" --query "id" --output tsv)
+spID=$(az iot central app identity show \
+ --name "myiotcentralapp" \
+ --resource-group "MyIoTCentralResourceGroup" \
+ --query "principalId" --output tsv)
+az role assignment create --assignee $spID --role "Azure Event Hubs Data Sender" \
+ --scope $scope
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+You can enable the managed identity when you create an IoT Central application:
+
+```powershell
+# Create an IoT Central application with a managed identity
+New-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Name "myiotcentralapp" -Subdomain "mysubdomain" `
+ -Sku "ST1" -Template "iotc-pnp-preview" `
+ -DisplayName "My Custom Display Name" -Identity "SystemAssigned"
+```
+
+Alternatively, you can enable a managed identity on an existing IoT Central application:
+
+```powershell
+# Enable a system-assigned managed identity
+Set-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
+ -Name "myiotcentralapp" -Identity "SystemAssigned"
+```
+
+After you enable the managed identity, you can use PowerShell to configure the role assignments.
+
+Use the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) cmdlet to create a role assignment. For example, the following commands first retrieve the principal ID of the managed identity. The second command assigns the `Azure Event Hubs Data Sender` role to the principal ID in the scope of the `MyIoTCentralResourceGroup` resource group:
+
+```powershell
+$resourceGroup = Get-AzResourceGroup -Name "MyIoTCentralResourceGroup"
+$app = Get-AzIotCentralApp -ResourceGroupName $resourceGroup.ResourceGroupName -Name "myiotcentralapp"
+$sp = Get-AzADServicePrincipal -ObjectId $app.Identity.PrincipalId
+New-AzRoleAssignment -RoleDefinitionName "Azure Event Hubs Data Sender" `
+ -ObjectId $sp.Id -Scope $resourceGroup.ResourceId
+```
+++
+To learn more about the role assignments, see:
+
+* [Built-in roles for Azure Event Hubs](../../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs)
+* [Built-in roles for Azure Service Bus](../../service-bus-messaging/authenticate-application.md#azure-built-in-roles-for-azure-service-bus)
+* [Built-in roles for Azure Storage Services](../../role-based-access-control/built-in-roles.md#storage)
+
+## Monitor application health
+
+You can use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
+
+> [!NOTE]
+> IoT Central applications also have an internal [audit log](howto-use-audit-logs.md) to track activity within the application.
+
+Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
+
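As an illustration, you can also query these metrics from the Azure CLI with the general-purpose `az monitor metrics list` command. The application name, resource group, and metric name (`connectedDeviceCount`) below are examples; substitute your own values:

```azurecli
# Look up the resource ID of the IoT Central application
appId=$(az iot central app show \
  --name "myiotcentralapp" \
  --resource-group "MyIoTCentralResourceGroup" \
  --query "id" --output tsv)

# Query the connected device count metric, aggregated hourly
az monitor metrics list \
  --resource "$appId" \
  --metric "connectedDeviceCount" \
  --interval PT1H
```
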
+[Azure role-based access control](../../role-based-access-control/overview.md) manages access to metrics in the Azure portal. Use the Azure portal to add users to the IoT Central application, resource group, or subscription to grant them access. You must add a user in the portal even if they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer-grained access control.
+
+### View metrics in the Azure portal
+
+The following example **Metrics** page shows a plot of the number of devices connected to your IoT Central application. For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
+
+To view IoT Central metrics in the portal:
+
+1. Navigate to your IoT Central application resource in the portal. By default, IoT Central resources are located in a resource group called **IOTC**.
+1. To create a chart from your application's metrics, select **Metrics** in the **Monitoring** section.
++
+### Export logs and metrics
+
+Use the **Diagnostics settings** page to configure exporting metrics and logs to different destinations. To learn more, see [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md).
+
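As a rough sketch, the same export can also be configured from the Azure CLI with `az monitor diagnostic-settings create`; the setting name and the resource ID placeholders below are examples, not values from this article:

```azurecli
# Export all platform metrics from an IoT Central application to a Log Analytics workspace
# Replace the placeholders with your own resource IDs
az monitor diagnostic-settings create \
  --name "iotc-diagnostics" \
  --resource "<iot-central-app-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```
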
+### Analyze logs and metrics
+
+Use the **Workbooks** page to analyze logs and create visual reports. To learn more, see [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+
+### Metrics and invoices
+
+Metrics might differ from the numbers shown on your Azure IoT Central invoice. This situation occurs for reasons such as:
+
+* IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics.
+
+* IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. You can validate your device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
+
+* While metrics might show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
+
+## Monitor connected IoT Edge devices
+
+If your application uses IoT Edge devices, you can monitor the health of your IoT Edge devices and modules using Azure Monitor. To learn more, see [Collect and transport Azure IoT Edge metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
- Title: Manage IoT Central from Azure CLI or PowerShell
-description: How to create and manage your IoT Central application using the Azure CLI or PowerShell and configure a managed system identity for secure data export.
---- Previously updated : 06/14/2023----
-# Manage IoT Central from Azure CLI or PowerShell
-
-Instead of creating and managing IoT Central applications in the [Azure portal](https://portal.azure.com/#create/Microsoft.IoTCentral), you can use [Azure CLI](/cli/azure/) or [Azure PowerShell](/powershell/azure/) to manage your applications.
-
-If you prefer to use a language such as JavaScript, Python, C#, Ruby, or Go to create, update, list, and delete Azure IoT Central applications, see the [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository.
-
-## Prerequisites
-
-# [Azure CLI](#tab/azure-cli)
--
-# [PowerShell](#tab/azure-powershell)
--
-> [!TIP]
-> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps#change-the-active-subscription).
-
-Run the following command to check the [IoT Central module](/powershell/module/az.iotcentral/) is installed in your PowerShell environment:
-
-```powershell
-Get-InstalledModule -name Az.I*
-```
-
-If the list of installed modules doesn't include **Az.IotCentral**, run the following command:
-
-```powershell
-Install-Module Az.IotCentral
-```
----
-## Create an application
-
-# [Azure CLI](#tab/azure-cli)
-
-Use the [az iot central app create](/cli/azure/iot/central/app#az-iot-central-app-create) command to create an IoT Central application in your Azure subscription. For example:
-
-```Azure CLI
-# Create a resource group for the IoT Central application
-az group create --location "East US" \
- --name "MyIoTCentralResourceGroup"
-```
-
-```azurecli
-# Create an IoT Central application
-az iot central app create \
- --resource-group "MyIoTCentralResourceGroup" \
- --name "myiotcentralapp" --subdomain "mysubdomain" \
- --sku ST1 --template "iotc-pnp-preview" \
- --display-name "My Custom Display Name"
-```
-
-These commands first create a resource group in the east US region for the application. The following table describes the parameters used with the **az iot central app create** command:
-
-| Parameter | Description |
-| -- | -- |
-| resource-group | The resource group that contains the application. This resource group must already exist in your subscription. |
-| location | By default, this command uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia East**, **Canada Central**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **South Central US**, **Southeast Asia**, **UK South**, **West Europe**, and **West US**. |
-| name | The name of the application in the Azure portal. Avoid special characters - instead, use lower case letters (a-z), numbers (0-9), and dashes (-).|
-| subdomain | The subdomain in the URL of the application. In the example, the application URL is `https://mysubdomain.azureiotcentral.com`. |
-| sku | Currently, you can use either **ST1** or **ST2**. See [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). |
-| template | The application template to use. For more information, see the following table. |
-| display-name | The name of the application as displayed in the UI. |
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the [New-AzIotCentralApp](/powershell/module/az.iotcentral/New-AzIotCentralApp) cmdlet to create an IoT Central application in your Azure subscription. For example:
-
-```powershell
-# Create a resource group for the IoT Central application
-New-AzResourceGroup -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Location "East US"
-```
-
-```powershell
-# Create an IoT Central application
-New-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Name "myiotcentralapp" -Subdomain "mysubdomain" `
- -Sku "ST1" -Template "iotc-pnp-preview" `
- -DisplayName "My Custom Display Name"
-```
-
-The script first creates a resource group in the East US region for the application. The following table describes the parameters used with the **New-AzIotCentralApp** command:
-
-|Parameter |Description |
-|||
-|ResourceGroupName |The resource group that contains the application. This resource group must already exist in your subscription. |
-|Location |By default, this cmdlet uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia East**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **Southeast Asia**, **UK South**, **West Europe** and **West US** regions. |
-|Name |The name of the application in the Azure portal. Avoid special characters - instead, use lower case letters (a-z), numbers (0-9), and dashes (-). |
-|Subdomain |The subdomain in the URL of the application. In the example, the application URL is `https://mysubdomain.azureiotcentral.com`. |
-|Sku |Currently, you can use either **ST1** or **ST2**. See [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). |
-|Template | The application template to use. For more information, see the following table. |
-|DisplayName |The name of the application as displayed in the UI. |
---
-### Application templates
--
-If you've created your own application template, you can use it to create a new application. When asked for an application template, enter the app ID shown in the exported app's URL shareable link under the [Application template export](howto-create-iot-central-application.md#create-and-use-a-custom-application-template) section of your app.
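For example, a minimal sketch of creating an application from a custom template; the template ID shown is a hypothetical placeholder, so copy the real app ID from your exported template's shareable link:

```azurecli
# Create an application from a custom application template.
# "your-custom-template-id" is a hypothetical placeholder; use the app ID from your exported template's shareable link.
az iot central app create \
    --resource-group "MyIoTCentralResourceGroup" \
    --name "myiotcentralapp" --subdomain "mysubdomain" \
    --sku ST2 --template "your-custom-template-id" \
    --display-name "App from my custom template"
```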
-
-## View applications
-
-# [Azure CLI](#tab/azure-cli)
-
-Use the [az iot central app list](/cli/azure/iot/central/app#az-iot-central-app-list) command to list your IoT Central applications and view metadata.
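For example, a quick sketch that lists applications as a table:

```azurecli
# List every IoT Central application in the current subscription
az iot central app list --output table

# Limit the list to a single resource group
az iot central app list --resource-group "MyIoTCentralResourceGroup" --output table
```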
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the [Get-AzIotCentralApp](/powershell/module/az.iotcentral/Get-AzIotCentralApp) cmdlet to list your IoT Central applications and view metadata.
---
-## Modify an application
-
-# [Azure CLI](#tab/azure-cli)
-
-Use the [az iot central app update](/cli/azure/iot/central/app#az-iot-central-app-update) command to update the metadata of an IoT Central application. For example, to change the display name of your application:
-
-```azurecli
-az iot central app update --name myiotcentralapp \
- --resource-group MyIoTCentralResourceGroup \
- --set displayName="My new display name"
-```
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the [Set-AzIotCentralApp](/powershell/module/az.iotcentral/set-aziotcentralapp) cmdlet to update the metadata of an IoT Central application. For example, to change the display name of your application:
-
-```powershell
-Set-AzIotCentralApp -Name "myiotcentralapp" `
- -ResourceGroupName "MyIoTCentralResourceGroup" `
- -DisplayName "My new display name"
-```
---
-## Delete an application
-
-# [Azure CLI](#tab/azure-cli)
-
-Use the [az iot central app delete](/cli/azure/iot/central/app#az-iot-central-app-delete) command to delete an IoT Central application. For example:
-
-```azurecli
-az iot central app delete --name myiotcentralapp \
- --resource-group MyIoTCentralResourceGroup
-```
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the [Remove-AzIotCentralApp](/powershell/module/az.iotcentral/Remove-AzIotCentralApp) cmdlet to delete an IoT Central application. For example:
-
-```powershell
-Remove-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
- -Name "myiotcentralapp"
-```
---
-## Configure a managed identity
-
-An IoT Central application can use a system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connection to a [data export destination](howto-export-to-blob-storage.md#connection-options).
-
-To enable the managed identity, use either the [Azure portal - Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) or the CLI. You can enable the managed identity when you create an IoT Central application:
-
-```azurecli
-# Create an IoT Central application with a managed identity
-az iot central app create \
- --resource-group "MyIoTCentralResourceGroup" \
- --name "myiotcentralapp" --subdomain "mysubdomain" \
- --sku ST1 --template "iotc-pnp-preview" \
- --display-name "My Custom Display Name" \
- --mi-system-assigned
-```
-
-Alternatively, you can enable a managed identity on an existing IoT Central application:
-
-```azurecli
-# Enable a system-assigned managed identity
-az iot central app identity assign --name "myiotcentralapp" \
- --resource-group "MyIoTCentralResourceGroup" \
- --system-assigned
-```
-
-After you enable the managed identity, you can use the CLI to configure the role assignments.
-
-Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to create a role assignment. For example, the following commands first retrieve the resource group scope and the principal ID of the managed identity. The last command then assigns the `Azure Event Hubs Data Sender` role to that principal ID, scoped to the `MyIoTCentralResourceGroup` resource group:
-
-```azurecli
-scope=$(az group show -n "MyIoTCentralResourceGroup" --query "id" --output tsv)
-spID=$(az iot central app identity show \
- --name "myiotcentralapp" \
- --resource-group "MyIoTCentralResourceGroup" \
- --query "principalId" --output tsv)
-az role assignment create --assignee $spID --role "Azure Event Hubs Data Sender" \
- --scope $scope
-```
-
-To learn more about the role assignments, see:
-
-- [Built-in roles for Azure Event Hubs](../../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs)
-- [Built-in roles for Azure Service Bus](../../service-bus-messaging/authenticate-application.md#azure-built-in-roles-for-azure-service-bus)
-- [Built-in roles for Azure Storage Services](/rest/api/storageservices/authorize-with-azure-active-directory#manage-access-rights-with-rbac)
-
-## Next steps
-
-Now that you've learned how to manage Azure IoT Central applications from Azure CLI or PowerShell, here's the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Administer your application](howto-administer.md)
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
- Title: Manage and monitor IoT Central in the Azure portal
-description: This article describes how to create, manage, and monitor your IoT Central applications and enable managed identities from the Azure portal.
-Previously updated: 07/14/2023
-# Manage and monitor IoT Central from the Azure portal
-
-You can use the [Azure portal](https://portal.azure.com) to create, manage, and monitor IoT Central applications.
-
-To learn how to create an IoT Central application, see [Create an IoT Central application](howto-create-iot-central-application.md).
-
-## Manage existing IoT Central applications
-
-If you already have an Azure IoT Central application, you can delete it, or move it to a different subscription or resource group in the Azure portal.
-
-To get started, search for your application in the search bar at the top of the Azure portal. You can also view all your applications by searching for _IoT Central Applications_ and selecting the service:
--
-When you select an application in the search results, the Azure portal shows you its overview. You can navigate to the application by selecting the **IoT Central Application URL**:
--
-> [!NOTE]
-> Use the **IoT Central Application URL** to access the application for the first time.
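If you prefer the command line, a minimal sketch that shows the same overview details with the Azure CLI; the application and resource group names are assumed example values, and the output includes the subdomain used in the application URL:

```azurecli
# Show the application's metadata, including the subdomain that forms the application URL
az iot central app show \
    --name "myiotcentralapp" \
    --resource-group "MyIoTCentralResourceGroup"
```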
-
-To move the application to a different resource group, select **move** beside **Resource group**. On the **Move resources** page, choose the resource group you'd like to move this application to.
-
-To move the application to a different subscription, select **move** beside **Subscription**. On the **Move resources** page, choose the subscription you'd like to move this application to:
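As a hedged alternative to the portal, a minimal Azure CLI sketch for the same move; the resource type `Microsoft.IoTCentral/IoTApps`, the application name, and the target group are assumptions for illustration:

```azurecli
# Look up the resource ID of the IoT Central application
appId=$(az resource show \
    --resource-group "MyIoTCentralResourceGroup" \
    --name "myiotcentralapp" \
    --resource-type "Microsoft.IoTCentral/IoTApps" \
    --query "id" --output tsv)

# Move the application to another resource group in the same subscription
az resource move --destination-group "MyOtherResourceGroup" --ids $appId
```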
--
-## Manage networking
-
-You can use private IP addresses from a virtual network address space to manage your devices in your IoT Central application and eliminate exposure on the public internet. To learn more, see [Create and configure a private endpoint for IoT Central](../core/howto-create-private-endpoint.md).
-
-## Configure a managed identity
-
-When you configure a data export in your IoT Central application, you can choose to configure the connection to the destination with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). Managed identities are more secure because:
-
-* You don't store the credentials for your resource in a connection string in your IoT Central application.
-* The credentials are automatically tied to the lifetime of your IoT Central application.
-* Managed identities automatically rotate their security keys regularly.
-
-IoT Central currently uses [system-assigned managed identities](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). To create the managed identity for your application, you use either the Azure portal or the REST API.
-
-> [!NOTE]
-> You can only add a managed identity to an IoT Central application that was created in a region. All new applications are created in a region.
-
-When you configure a managed identity, the configuration includes a *scope* and a *role*:
-
-* The scope defines where you can use the managed identity. For example, you can use an Azure resource group as the scope. In this case, both the IoT Central application and the destination must be in the same resource group.
-* The role defines what permissions the IoT Central application is granted in the destination service. For example, for an IoT Central application to send data to an event hub, the managed identity needs the **Azure Event Hubs Data Sender** role assignment.
--
-You can configure role assignments in the Azure portal or use the Azure CLI:
-
-* To learn more about how to configure role assignments in the Azure portal for specific destinations, see [Export IoT data to cloud destinations using blob storage](howto-export-to-blob-storage.md).
-* To learn more about how to configure role assignments using the Azure CLI, see [Manage IoT Central from Azure CLI or PowerShell](howto-manage-iot-central-from-cli.md).
-
-## Monitor application health
-
-You can use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
-
-> [!NOTE]
-> IoT Central applications have an internal [audit log](howto-use-audit-logs.md) to track activity within the application.
-
-Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
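For example, a minimal Azure CLI sketch that queries one metric; the metric name `connectedDeviceCount` is an assumption, so check the supported metrics list referenced later in this section for the exact names:

```azurecli
# Get the resource ID of the IoT Central application
resourceId=$(az iot central app show --name "myiotcentralapp" \
    --resource-group "MyIoTCentralResourceGroup" --query "id" --output tsv)

# Query the connected device count metric, aggregated hourly
az monitor metrics list --resource $resourceId \
    --metric "connectedDeviceCount" \
    --interval PT1H --aggregation Average
```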
-
-Access to metrics in the Azure portal is managed by [Azure role-based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application, resource group, or subscription to grant them access. You must add a user in the portal even if they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer-grained access control.
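For example, a minimal sketch that grants a user access to view the application's metrics; the **Monitoring Reader** role and the user shown are assumptions, so pick the built-in role and principal that match your needs:

```azurecli
# Grant a user the Monitoring Reader role, scoped to the IoT Central application
scope=$(az iot central app show --name "myiotcentralapp" \
    --resource-group "MyIoTCentralResourceGroup" --query "id" --output tsv)
az role assignment create --assignee "user@contoso.com" \
    --role "Monitoring Reader" --scope $scope
```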
-
-### View metrics in the Azure portal
-
-The following example **Metrics** page shows a plot of the number of devices connected to your IoT Central application. For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
-
-To view IoT Central metrics in the portal:
-
-1. Navigate to your IoT Central application resource in the portal. By default, IoT Central resources are located in a resource group called **IOTC**.
-1. To create a chart from your application's metrics, select **Metrics** in the **Monitoring** section.
--
-### Export logs and metrics
-
-Use the **Diagnostics settings** page to configure exporting metrics and logs to different destinations. To learn more, see [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md).
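For example, a minimal sketch that routes all platform metrics to an existing Log Analytics workspace; the workspace resource ID is a hypothetical placeholder:

```azurecli
# Get the resource ID of the IoT Central application
resourceId=$(az iot central app show --name "myiotcentralapp" \
    --resource-group "MyIoTCentralResourceGroup" --query "id" --output tsv)

# Send all platform metrics to a Log Analytics workspace
az monitor diagnostic-settings create --name "iotc-metrics" \
    --resource $resourceId \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```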
-
-### Analyze logs and metrics
-
-Use the **Workbooks** page to analyze logs and create visual reports. To learn more, see [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
-
-### Metrics and invoices
-
-Metrics may differ from the numbers shown on your Azure IoT Central invoice. This situation occurs for reasons such as:
-
-* IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics.
-
-* IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. You may choose to validate your device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
-
-* While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
-
-## Monitor connected IoT Edge devices
-
-To learn how to remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
-
-## Next steps
-
-Now that you've learned how to manage and monitor Azure IoT Central applications from the Azure portal, here's the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Administer your application](howto-administer.md)
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
You have several options to create device templates:
- When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties and views your IoT Central application needs to the device template.
- When the device connects to IoT Central, let IoT Central [autogenerate a device template](#autogenerate-a-device-template) definition from the data the device sends.
- Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) and [IoT Central DTDL extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs.
-- You can also add device templates to an IoT Central application using the [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md) or the [CLI](howto-manage-iot-central-from-cli.md).
+- You can also add device templates to an IoT Central application using the [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md).
> [!NOTE] > In each case, the device code must implement the capabilities defined in the model. The device code implementation isn't affected by the cloud properties and views sections of the device template.
iot-central Howto Use Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md
The following screenshot shows the audit log view with the location of the sorti
:::image type="content" source="media/howto-use-audit-logs/audit-log.png" alt-text="Screenshot that shows the audit log. The location of the sort and filter controls is highlighted." lightbox="media/howto-use-audit-logs/audit-log.png":::

> [!TIP]
-> If you want to monitor the health of your connected devices, use Azure Monitor. To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
+> If you want to monitor the health of your connected devices, use Azure Monitor. To learn more, see [Monitor application health](howto-manage-and-monitor-iot-central.md#monitor-application-health).
## Customize the log
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
An administrator can use IoT Central metrics to assess the health of connected d
To view the metrics, an administrator can use charts in the Azure portal, a REST API, or PowerShell or Azure CLI queries.
-To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
+To learn more, see [Monitor application health](howto-manage-and-monitor-iot-central.md#monitor-application-health).
## Monitor connected IoT Edge devices
To learn how to monitor your IoT Edge fleet remotely by using Azure Monitor and
Many of the tools you use as an administrator are available in the **Security** and **Settings** sections of each IoT Central application. You can also use the following tools to complete some administrative tasks:

-- [Azure Command-Line Interface (CLI) or PowerShell](howto-manage-iot-central-from-cli.md)
-- [Azure portal](howto-manage-iot-central-from-portal.md)
+- [Azure Command-Line Interface (CLI) or PowerShell](howto-manage-and-monitor-iot-central.md)
+- [Azure portal](howto-manage-and-monitor-iot-central.md)
## Next steps
iot-central Overview Iot Central Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md
Managed identities are more secure because:
To learn more, see:

- [Export IoT data to cloud destinations using blob storage](howto-export-to-blob-storage.md)
-- [Configure a managed identity in the Azure portal](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
-- [Configure a managed identity using the Azure CLI](howto-manage-iot-central-from-cli.md#configure-a-managed-identity)
+- [Configure a managed identity](howto-manage-and-monitor-iot-central.md#configure-a-managed-identity)
+ ## Connect to a destination on a secure virtual network
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
You use *data plane* REST APIs to access the entities in and the capabilities of
To learn more, see [Tutorial: Use the REST API to manage an Azure IoT Central application](tutorial-use-rest-api.md).
-You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the REST API, the Azure CLI, or Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Manage IoT Central from Azure CLI](howto-manage-iot-central-from-cli.md).
+You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the REST API, the Azure CLI, or Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
-## Next steps
+## Next step
-If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
+If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Use your smartphone as a device to send telemetry to an IoT Central application](./quick-deploy-iot-central.md).
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
The IoT Central documentation refers to four user roles that interact with an Io
- A _solution builder_ is responsible for [creating an application](quick-deploy-iot-central.md), [configuring rules and actions](quick-configure-rules.md), [defining integrations with other services](quick-export-data.md), and further customizing the application for operators and device developers.
- An _operator_ [manages the devices](howto-manage-devices-individually.md) connected to the application.
-- An _administrator_ is responsible for administrative tasks such as managing [user roles and permissions](howto-administer.md) within the application and [configuring managed identities](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) for securing connects to other services.
+- An _administrator_ is responsible for administrative tasks such as managing [user roles and permissions](howto-administer.md) within the application and [configuring managed identities](howto-manage-and-monitor-iot-central.md#configure-a-managed-identity) for securing connections to other services.
- A _device developer_ [creates the code that runs on a device](./tutorial-connect-device.md) or [IoT Edge module](concepts-iot-edge.md) connected to your application.

## Next steps
iot-develop About Getting Started Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-getting-started-device-development.md
Each quickstart shows how to set up a code sample and tools, run a temperature c
|Quickstart|Device SDK|
|-|-|
-|[Send telemetry from a device to Azure IoT Hub (C)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)|[Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c)|
-|[Send telemetry from a device to Azure IoT Hub (C#)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)|[Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp)|
-|[Send telemetry from a device to Azure IoT Hub (Node.js)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)|[Azure IoT Node.js SDK](https://github.com/Azure/azure-iot-sdk-node)|
-|[Send telemetry from a device to Azure IoT Hub (Python)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python)|[Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python)|
-|[Send telemetry from a device to Azure IoT Hub (Java)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java)|[Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java)|
+|[Send telemetry from a device to Azure IoT Hub (C)](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)|[Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c)|
+|[Send telemetry from a device to Azure IoT Hub (C#)](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-csharp)|[Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp)|
+|[Send telemetry from a device to Azure IoT Hub (Node.js)](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)|[Azure IoT Node.js SDK](https://github.com/Azure/azure-iot-sdk-node)|
+|[Send telemetry from a device to Azure IoT Hub (Python)](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-python)|[Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python)|
+|[Send telemetry from a device to Azure IoT Hub (Java)](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-java)|[Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java)|
## Quickstarts for embedded devices

See the following articles to start using the Azure IoT embedded device SDKs to connect embedded, resource-constrained microcontroller unit (MCU) devices to Azure IoT. Examples of constrained MCU devices with compute and memory limitations include sensors and special-purpose hardware modules or boards. The following quickstarts require you to have the listed MCU devices.
Each quickstart shows how to set up a code sample and tools, flash the device, a
|Quickstart|Device|Embedded device SDK|
|-|-|-|
|[Quickstart: Connect a Microchip ATSAME54-XPro Evaluation kit to IoT Hub](quickstart-devkit-microchip-atsame54-xpro-iot-hub.md)|Microchip ATSAME54-XPro|Azure RTOS middleware|
-|[Quickstart: Connect an ESPRESSIF ESP32-Azure IoT Kit to IoT Hub](quickstart-devkit-espressif-esp32-freertos-iot-hub.md)|ESPRESSIF ESP32|FreeRTOS middleware|
+|[Quickstart: Connect an ESPRESSIF ESP32-Azure IoT Kit to IoT Hub](../iot/tutorial-devkit-espressif-esp32-freertos-iot-hub.md)|ESPRESSIF ESP32|FreeRTOS middleware|
|[Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub](quickstart-devkit-stm-b-l475e-iot-hub.md)|STMicroelectronics L475E-IOT01A|Azure RTOS middleware|
|[Quickstart: Connect an NXP MIMXRT1060-EVK Evaluation kit to IoT Hub](quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md)|NXP MIMXRT1060-EVK|Azure RTOS middleware|
|[Connect an MXCHIP AZ3166 devkit to IoT Hub](quickstart-devkit-mxchip-az3166-iot-hub.md)|MXCHIP AZ3166|Azure RTOS middleware|
To learn more about working with the IoT device SDKs and developing for general
- [Build a device solution for IoT Hub](set-up-environment.md)

To learn more about working with the IoT C SDK and embedded C SDK for embedded devices, see the following article.

-- [C SDK and Embedded C SDK usage scenarios](concepts-using-c-sdk-and-embedded-c-sdk.md)
+- [C SDK and Embedded C SDK usage scenarios](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
iot-develop About Iot Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-develop.md
The current embedded SDKs target the **C** language. The embedded SDKs provide e
## Choosing your hardware

Azure IoT devices are the basic building blocks of an IoT solution and are responsible for observing and interacting with their environment. There are many different types of IoT devices, and it's helpful to understand the kinds of devices that exist and how they can affect your development process.
-For more information on the difference between devices types covered in this article, see [About IoT Device Types](concepts-iot-device-types.md).
+For more information on the difference between device types covered in this article, see [About IoT Device Types](../iot/concepts-iot-device-types.md).
## Choosing an SDK

As an Azure IoT device developer, you have a diverse set of SDKs, protocols and tools to help build device-enabled cloud applications.
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
The main consideration in choosing an SDK is the device's own hardware. General
|[Embedded device SDKs](#embedded-device-sdks)|Embedded devices|Special-purpose MCU-based devices with compute and memory limitations|Sensors|

> [!Note]
-> For more information on different device categories so you can choose the best SDK for your device, see [Azure IoT Device Types](concepts-iot-device-types.md).
+> For more information on different device categories so you can choose the best SDK for your device, see [Azure IoT Device Types](../iot/concepts-iot-device-types.md).
## Device SDKs
iot-develop Quickstart Devkit Microchip Atsame54 Xpro Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a general simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a general simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
As a next step, explore the following articles to learn more about using the IoT
> [!div class="nextstepaction"] > [Connect an MXCHIP AZ3166 devkit to IoT Hub](quickstart-devkit-mxchip-az3166-iot-hub.md) > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a general simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"] > [Connect an STMicroelectronics B-L475E-IOT01A to IoT Central](quickstart-devkit-stm-b-l475e.md)
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT]
iot-develop Quickstart Devkit Stm B L4s5i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a general device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a general device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"] > [Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub](quickstart-devkit-stm-b-l475e-iot-hub.md) > [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Stm B U585i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md
In this quickstart, you built a custom image that contains Azure RTOS sample cod
As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT. > [!div class="nextstepaction"]
-> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Connect a general simulated device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"] > [Quickstart: Connect an STMicroelectronics B-L4S5I-IOT01A Discovery kit to IoT Hub](quickstart-devkit-stm-b-l4s5i-iot-hub.md) > [!div class="nextstepaction"]
-> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](../iot/concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
In this article, you:
To learn how to:
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json).
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json).
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
iot-hub Device Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-dotnet.md
In this article, you:
To learn how to:
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp).
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp).
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
iot-hub Device Twins Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-java.md
In this article, you:
To learn how to:
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-java)
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-java)
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
iot-hub Device Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-node.md
In this article, you:
To learn how to:
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-nodejs)
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-nodejs)
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
iot-hub Device Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-python.md
In this article, you:
To learn how to:
-* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) article.
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-python) article.
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml) using the Azure IoT .NET device and service SDKs.
-The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-dotnet.md) article show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) article shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
+The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-dotnet.md) article show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) article shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
* Videos
* Large files that contain images
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml) using Java.
-The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-java) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-java.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure message routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
+The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-java) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-java.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure message routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
* Videos
* Large files that contain images
iot-hub File Upload Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-node.md
This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml) using Node.js.
-The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-node.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
+The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-node.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
* Videos
* Large files that contain images
iot-hub File Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-python.md
This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml) using Python.
-The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-python) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-python.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
+The [Send telemetry from a device to an IoT hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-python) quickstart and [Send cloud-to-device messages with IoT Hub](c2d-messaging-python.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
* Videos
* Large files that contain images
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
For more information, see [Compare message routing and Event Grid for IoT Hub](i
To try out an end-to-end IoT solution, check out the IoT Hub quickstarts:

- [Send telemetry from a device to IoT Hub](quickstart-send-telemetry-cli.md)
-- [Send telemetry from an IoT Plug and Play device to IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
+- [Send telemetry from an IoT Plug and Play device to IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
- [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md)

To learn more about the ways you can build and deploy IoT solutions with Azure IoT, visit:
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
The **iothub-connection-auth-method** property contains a JSON serialized object
## Next steps

* For information about message size limits in IoT Hub, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
-* To learn how to create and read IoT Hub messages in various programming languages, see the [Quickstarts](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json).
+* To learn how to create and read IoT Hub messages in various programming languages, see the [Quickstarts](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json).
* To learn about the structure of non-telemetry events generated by IoT Hub, see [IoT Hub non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md).
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
For more information, see [IoT Hub message routing query syntax](./iot-hub-devgu
Use the following articles to learn how to read messages from an endpoint.
-* Read from a [built-in endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
+* Read from a [built-in endpoint](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
* Read from [Blob storage](../storage/blobs/storage-blob-event-quickstart.md)
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
The tier also determines the throttling limits that IoT Hub enforces on all oper
Operation throttles are rate limitations that are applied in minute ranges and are intended to prevent abuse. They're also subject to [traffic shaping](#traffic-shaping).
-It's a good practice to throttle your calls so that you don't hit/exceed the throttling limits. If you do hit the limit, IoT Hub responds with error code 429 and the client should back-off and retry. These limits are per hub (or in some cases per hub/unit). For more information, see [Retry patterns](../iot-develop/concepts-manage-device-reconnections.md#retry-patterns).
+It's a good practice to throttle your calls so that you don't hit or exceed the throttling limits. If you do hit the limit, IoT Hub responds with error code 429 and the client should back off and retry. These limits are per hub (or in some cases per hub/unit). For more information, see [Retry patterns](../iot/concepts-manage-device-reconnections.md#retry-patterns).
For pricing details about which operations are charged and under what circumstances, see [billing information](iot-hub-devguide-pricing.md).
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Azure IoT SDKs are also available for the following
## Next steps
-Learn how to [manage connectivity and reliable messaging](../iot-develop/concepts-manage-device-reconnections.md) using the IoT Hub device SDKs.
+Learn how to [manage connectivity and reliable messaging](../iot/concepts-manage-device-reconnections.md) using the IoT Hub device SDKs.
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
In this section, you edit the [iothub_ll_telemetry_sample.c](https://github.com/
:::code language="c" source="~/samples-iot-distributed-tracing/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" range="56-60" highlight="2":::
- Replace the value of the `connectionString` constant with the device connection string that you saved in the [Register a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json#register-a-device) section of the quickstart for sending telemetry.
+ Replace the value of the `connectionString` constant with the device connection string that you saved in the [Register a device](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json#register-a-device) section of the quickstart for sending telemetry.
1. Find the line of code that calls `IoTHubDeviceClient_LL_SetConnectionStatusCallback` to register a connection status callback function before the send message loop. Add code under that line to call `IoTHubDeviceClient_LL_EnablePolicyConfiguration` and enable distributed tracing for the device:
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md
Depending on the uptime goals you define for your IoT solutions, you should dete
## Intra-region HA
-The IoT Hub service provides intra-region HA by implementing redundancies in almost all layers of the service. The [SLA published by the IoT Hub service](https://azure.microsoft.com/support/legal/sl#retry-patterns) must be built in to the components interacting with a cloud application to deal with transient failures.
+The IoT Hub service provides intra-region HA by implementing redundancies in almost all layers of the service. The [SLA published by the IoT Hub service](https://azure.microsoft.com/support/legal/sl#retry-patterns) must be built in to the components interacting with a cloud application to deal with transient failures.
## Availability zones
Here's a summary of the HA/DR options presented in this article that can be used
## Next steps * [What is Azure IoT Hub?](about-iot-hub.md)
-* [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
+* [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
* [Tutorial: Perform manual failover for an IoT hub](tutorial-manual-failover.md)
iot-hub Iot Hub Live Data Visualization In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md
If you don't have an Azure subscription, [create a free account](https://azure.m
Before you begin this tutorial, have the following prerequisites in place:
-* Complete one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json) quickstarts in the development language of your choice. Alternatively, you can use any device app that sends temperature telemetry; for example, the [Raspberry Pi online simulator](raspberry-pi-get-started.md) or one of the [Embedded device](../iot-develop/quickstart-devkit-mxchip-az3166.md) quickstarts. These articles cover the following requirements:
+* Complete one of the [Send telemetry](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json) quickstarts in the development language of your choice. Alternatively, you can use any device app that sends temperature telemetry; for example, the [Raspberry Pi online simulator](raspberry-pi-get-started.md) or one of the [Embedded device](../iot-develop/quickstart-devkit-mxchip-az3166.md) quickstarts. These articles cover the following requirements:
* An active Azure subscription. * An Azure IoT hub in your subscription.
iot-hub Iot Hub Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-connectivity.md
If the previous steps didn't help, try:
* To learn more about resolving transient issues, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
-* To learn more about the Azure IoT device SDKs and managing retries, see [Retry patterns](../iot-develop/concepts-manage-device-reconnections.md#retry-patterns).
+* To learn more about the Azure IoT device SDKs and managing retries, see [Retry patterns](../iot/concepts-manage-device-reconnections.md#retry-patterns).
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
You can remove the Baltimore root certificate once all stages of the migration a
If you're experiencing general connectivity issues with IoT Hub, check out these troubleshooting resources:
-* [Connection and retry patterns with device SDKs](../iot-develop/concepts-manage-device-reconnections.md#connection-and-retry).
+* [Connection and retry patterns with device SDKs](../iot/concepts-manage-device-reconnections.md#connection-and-retry).
* [Understand and resolve Azure IoT Hub error codes](troubleshoot-error-codes.md). If you're watching Azure Monitor after migrating certificates, you should look for a DeviceDisconnect event followed by a DeviceConnect event, as demonstrated in the following screenshot:
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
To resolve this error:
* Use the latest versions of the [IoT SDKs](iot-hub-devguide-sdks.md). * See the guidance for [IoT Hub internal server errors](#500xxx-internal-errors).
-We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](../iot-develop/concepts-manage-device-reconnections.md)
+We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](../iot/concepts-manage-device-reconnections.md)
## 409001 Device already exists
You may see that your request to IoT Hub fails with an error that begins with 50
There can be many causes for a 500xxx error response. In all cases, the issue is most likely transient. While the IoT Hub team works hard to maintain [the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/), small subsets of IoT Hub nodes can occasionally experience transient faults. When your device tries to connect to a node that's having issues, you receive this error.
-To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](../iot-develop/concepts-manage-device-reconnections.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](../iot/concepts-manage-device-reconnections.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
If the problem persists, check [Resource Health](iot-hub-azure-service-health-integration.md#check-iot-hub-health-with-azure-resource-health) and [Azure Status](https://azure.status.microsoft/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](tutorial-manual-failover.md).
iot-hub Tutorial Use Metrics And Diags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md
Use Azure Monitor to collect metrics and logs from your IoT hub to monitor the operation of your solution and troubleshoot problems when they occur. In this tutorial, you'll learn how to create charts based on metrics, how to create alerts that trigger on metrics, how to send IoT Hub operations and errors to Azure Monitor Logs, and how to check the logs for errors.
-This tutorial uses the Azure sample from the [.NET send telemetry quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp) to send messages to the IoT hub. You can always use a device or another sample to send messages, but you may have to modify a few steps accordingly.
+This tutorial uses the Azure sample from the [.NET send telemetry quickstart](../iot/tutorial-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json&pivots=programming-language-csharp) to send messages to the IoT hub. You can always use a device or another sample to send messages, but you may have to modify a few steps accordingly.
Some familiarity with Azure Monitor concepts might be helpful before you begin this tutorial. To learn more, see [Monitor IoT Hub](monitor-iot-hub.md). To learn more about the metrics and resource logs emitted by IoT Hub, see [Monitoring data reference](monitor-iot-hub-reference.md).
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
Title: Deploy extensions with Azure IoT Orchestrator
-description: Use the Azure portal, Azure CLI, or GitHub Actions to deploy Azure IoT Operations extensions with the Azure IoT Orchestrator
+description: Use the Azure CLI to deploy Azure IoT Operations extensions with the Azure IoT Orchestrator.
Previously updated : 01/31/2024 Last updated : 04/05/2024 #CustomerIntent: As an OT professional, I want to deploy Azure IoT Operations to a Kubernetes cluster.
Last updated 01/31/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Deploy Azure IoT Operations Preview to a Kubernetes cluster using the Azure portal, Azure CLI, or GitHub actions. Once you have Azure IoT Operations deployed, then you can use the Azure IoT Orchestrator Preview service to manage and deploy additional workloads to your cluster.
+Deploy Azure IoT Operations Preview to a Kubernetes cluster using the Azure CLI. Once you have Azure IoT Operations deployed, then you can use the Azure IoT Orchestrator Preview service to manage and deploy other workloads to your cluster.
## Prerequisites
-Cloud resources:
+Cloud resources:
-* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure subscription.
-* Azure access permissions. At a minimum, have **Contributor** permissions in your Azure subscription. Depending on the deployment method and feature flag status you select, you may also need **Microsoft/Authorization/roleAssignments/write** permissions. If you *don't* have role assignment write permissions, take the following additional steps when deploying:
+* Azure access permissions. At a minimum, have **Contributor** permissions in your Azure subscription. Depending on the deployment feature flag status you select, you might also need **Microsoft/Authorization/roleAssignments/write** permissions for the resource group that contains your Arc-enabled Kubernetes cluster. You can make a custom role in Azure role-based access control or assign a built-in role that grants this permission. For more information, see [Azure built-in roles for General](../../role-based-access-control/built-in-roles/general.md).
- * If deploying with an Azure Resource Manager template, set the `deployResourceSyncRules` parameter to `false`.
- * If deploying with the Azure CLI, include the `--disable-rsync-rules`.
+ If you *don't* have role assignment write permissions, you can still deploy Azure IoT Operations by disabling some features. This approach is discussed in more detail in the [Deploy extensions](#deploy-extensions) section of this article.
-* An [Azure Key Vault](../../key-vault/general/overview.md) that has the **Permission model** set to **Vault access policy**. You can check this setting in the **Access configuration** section of an existing key vault.
+ * In the Azure CLI, use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to give permissions. For example, `az role assignment create --assignee sp_name --role "Role Based Access Control Administrator" --scope subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup`
+
+ * In the Azure portal, you're prompted to restrict access using conditions when you assign privileged admin roles to a user or principal. For this scenario, select the **Allow user to assign all roles** condition in the **Add role assignment** page.
+
+ :::image type="content" source="./media/howto-deploy-iot-operations/add-role-assignment-conditions.png" alt-text="Screenshot that shows assigning users highly privileged role access in the Azure portal.":::
+
+* An Azure Key Vault that has the **Permission model** set to **Vault access policy**. You can check this setting in the **Access configuration** section of an existing key vault. If you need to create a new key vault, use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command:
+
+ ```azurecli
+ az keyvault create --enable-rbac-authorization false --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>"
+ ```
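+
+  To confirm the permission model of an existing vault, you can query its `enableRbacAuthorization` property, which should be `false` for the vault access policy model. The following command is a hedged example; substitute your own vault and resource group names:
+
+  ```azurecli
+  az keyvault show --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>" --query properties.enableRbacAuthorization
+  ```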
Development resources:
Development resources:
* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
- ```bash
+ ```azurecli
az extension add --upgrade --name azure-iot-ops ``` A cluster host:
-* An Azure Arc-enabled Kubernetes cluster. If you don't have one, follow the steps in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md?tabs=wsl-ubuntu).
+* An Azure Arc-enabled Kubernetes cluster. If you don't have one, follow the steps in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md?tabs=wsl-ubuntu).
If you've already deployed Azure IoT Operations to your cluster, uninstall those resources before continuing. For more information, see [Update a deployment](#update-a-deployment).
A cluster host:
az iot ops verify-host ``` - ## Deploy extensions
-### Azure CLI
- Use the Azure CLI to deploy Azure IoT Operations components to your Arc-enabled Kubernetes cluster.
-Sign in to Azure CLI. To prevent potential permission issues later, sign in interactively with a browser here even if you already logged in before.
+1. Sign in to Azure CLI interactively with a browser even if you already signed in before. If you don't sign in interactively, you might get an error that says *Your device is required to be managed to access your resource* when you continue to the next step to deploy Azure IoT Operations.
-```azurecli-interactive
-az login
-```
+ ```azurecli-interactive
+ az login
+ ```
-> [!NOTE]
-> If you're using GitHub Codespaces in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either:
->
-> * Open the codespace in VS Code desktop, and then run `az login` in the terminal. This opens a browser window where you can log in to Azure.
-> * After you get the localhost error on the browser, copy the URL from the browser and use `curl <URL>` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!".
+ > [!NOTE]
+ > If you're using GitHub Codespaces in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either:
+ >
+ > * Open the codespace in VS Code desktop, and then run `az login` in the terminal. This opens a browser window where you can log in to Azure.
+ > * Or, after you get the localhost error on the browser, copy the URL from the browser and use `curl <URL>` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!".
-Deploy Azure IoT Operations to your cluster. The [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command does the following steps:
+1. Deploy Azure IoT Operations to your cluster. Use optional flags to customize the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command to fit your scenario.
-* Creates a key vault in your resource group.
-* Sets up a service principal to give your cluster access to the key vault.
-* Configures TLS certificates.
-* Configures a secrets store on your cluster that connects to the key vault.
-* Deploys the Azure IoT Operations resources.
+ By default, the `az iot ops init` command takes the following actions, some of which require that the principal signed in to the CLI has elevated permissions:
-```azurecli-interactive
-az iot ops init --cluster <CLUSTER_NAME> -g <RESOURCE_GROUP> --kv-id $(az keyvault create -n <NEW_KEYVAULT_NAME> -g <RESOURCE_GROUP> -o tsv --query id)
-```
+ * Set up a service principal and app registration to give your cluster access to the key vault.
+ * Configure TLS certificates.
+ * Configure a secrets store on your cluster that connects to the key vault.
+ * Deploy the Azure IoT Operations resources.
->[!TIP]
->If you get an error that says *Your device is required to be managed to access your resource*, go back to the previous step and make sure that you signed in interactively.
+ ```azurecli-interactive
+ az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --kv-id <KEYVAULT_ID>
+ ```
-If you don't have **Microsoft.Authorization/roleAssignment/write** permissions in your Azure subscription, include the `--disable-rsync-rules` feature flag.
+ If you don't have **Microsoft.Authorization/roleAssignment/write** permissions in the resource group, add the `--disable-rsync-rules` feature flag. This flag disables the resource sync rules on the deployment.
-If you encounter an issue with the KeyVault access policy and the Service Principal (SP) permissions, [pass service principal and KeyVault arguments](howto-manage-secrets.md#pass-service-principal-and-key-vault-arguments-to-azure-iot-operations-deployment).
+ If you want to use an existing service principal and app registration instead of allowing `init` to create new ones, include the `--sp-app-id,` `--sp-object-id`, and `--sp-secret` parameters. For more information, see [Configure service principal and Key Vault manually](howto-manage-secrets.md#configure-service-principal-and-key-vault-manually).
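+
+   For example, a hedged sketch of an `init` command that reuses an existing service principal and skips creating resource sync rules might look like the following (all values are placeholders):
+
+   ```azurecli-interactive
+   az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --kv-id <KEYVAULT_ID> --sp-app-id <APP_ID> --sp-object-id <SP_OBJECT_ID> --sp-secret <SP_SECRET> --disable-rsync-rules
+   ```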
-Use optional flags to customize the `az iot ops init` command. To learn more, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
+1. After the deployment is complete, you can use [az iot ops check](/cli/azure/iot/ops#az-iot-ops-check) to evaluate IoT Operations service deployment for health, configuration, and usability. The *check* command can help you find problems in your deployment and configuration.
-> [!TIP]
-> You can check the configurations of topic maps, QoS, message routes with the [CLI extension](/cli/azure/iot/ops#az-iot-ops-check-examples) `az iot ops check --detail-level 2`.
+ ```azurecli
+ az iot ops check
+ ```
+
+ You can also check the configurations of topic maps, QoS, and message routes by adding the `--detail-level 2` parameter for a verbose view.
### Configure cluster network (AKS EE)
To view the pods on your cluster, run the following command:
kubectl get pods -n azure-iot-operations ```
-It can take several minutes for the deployment to complete. Continue running the `get pods` command to refresh your view.
+It can take several minutes for the deployment to complete. Rerun the `get pods` command to refresh your view.
To view your cluster on the Azure portal, use the following steps:
To view your cluster on the Azure portal, use the following steps:
## Update a deployment
-Currently, there is no support for updating an existing Azure IoT Operations deployment. Instead, start with a clean cluster for a new deployment.
+Currently, there's no support for updating an existing Azure IoT Operations deployment. Instead, start with a clean cluster for a new deployment.
If you want to delete the Azure IoT Operations deployment on your cluster so that you can redeploy to it, navigate to your cluster on the Azure portal. Select the extensions of the type **microsoft.iotoperations.x** and **microsoft.deviceregistry.assets**, then select **Uninstall**. Keep the secrets provider on your cluster, as that is a prerequisite for deployment and not included in a fresh deployment.
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
In this section, you use the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-ini
| **KEYVAULT_NAME** | A name for a new key vault. | ```azurecli
- az keyvault create --enable-rbac-authorization false --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>"
+ az keyvault create --enable-rbac-authorization false --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP
``` >[!TIP]
iot Concepts Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-iot-device-selection.md
+
+ Title: Azure IoT prototyping device selection list
+description: This document provides guidance on choosing a hardware device for prototyping IoT Azure solutions.
++++ Last updated : 04/04/2024+
+# IoT device selection list
+
+This IoT device selection list aims to give partners a starting point with IoT hardware to build prototypes and proof-of-concepts quickly and easily.[^1]
+
+All boards listed support users of all experience levels.
+
+>[!NOTE]
+>This table is not intended to be an exhaustive list or for bringing solutions to production. [^2] [^3]
+
+**Security advisory:** Except for the Azure Sphere, it's recommended to keep these devices behind a router and/or firewall.
+
+[^1]: *If you're new to hardware programming, for MCU dev work we recommend using VS Code Arduino Extension or VS Code Platform IO Extension. For SBC dev work, you program the device like you would a laptop, that is, on the device itself. The Raspberry Pi supports VS Code development.*
+
+[^2]: *Devices were selected based on the availability of support resources, common boards used for prototyping and PoCs, and boards that support beginner-friendly IDEs like Arduino IDE and VS Code extensions; for example, the Arduino extension and Platform IO extension. For simplicity, we aimed to keep the total device list <6. Other teams and individuals may have chosen to feature different boards based on their interpretation of the criteria.*
+
+[^3]: *For bringing devices to production, you likely want to test a PoC with a specific chipset (for example, ST's STM32 or Microchip's PIC-IoT breakout board series), design a custom board that can be manufactured for lower cost than the MCUs and SBCs listed here, or even explore FPGA-based dev kits. You may also want to use a development environment for professional electrical engineering like STM32CubeMX or the Arm Mbed browser-based programmer.*
+
+## Contents
+
+| Section | Description |
+|--|--|
+| [Start here](#start-here) | A guide to using this selection list. Includes suggested selection criteria.|
+| [Selection diagram](#application-selection-visual) | A visual that summarizes common selection criteria with possible hardware choices. |
+| [Terminology and ML requirements](#terminology-and-ml-requirements) | Terminology and acronym definitions and device requirements for edge machine learning (ML). |
+| [MCU device list](#mcu-device-list) | A list of recommended MCUs, for example, ESP32, with tech specs and alternatives. |
+| [SBC device list](#sbc-device-list) | A list of recommended SBCs, for example, Raspberry Pi, with tech specs and alternatives. |
+
+## Start here
+
+### How to use this document
+
+Use this document to better understand IoT terminology, device selection considerations, and to choose an IoT device for prototyping or building a proof-of-concept. We recommend the following procedure:
+
+1. Read through the 'what to consider when choosing a board' section to identify needs and constraints.
+
+2. Use the Application Selection Visual to identify possible options for your IoT scenario.
+
+3. Using the MCU or SBC Device Lists, check device specifications and compare against your needs/constraints.
+
+### What to consider when choosing a board
+
+To choose a device for your IoT prototype, see the following criteria:
+
+- **Microcontroller unit (MCU) or single board computer (SBC)**
+ - An MCU is preferred for single tasks, like gathering and uploading sensor data or machine learning at the edge. MCUs also tend to be lower cost.
+ - An SBC is preferred when you need multiple different tasks, like gathering sensor data and controlling another device. It may also be preferred in the early stages when there are many options for possible solutions - an SBC enables you to try lots of different approaches.
+
+- **Processing power**
+
+ - **Memory**: Consider how much memory storage (in bytes), file storage, and memory to run programs your project needs.
+
+ - **Clock speed**: Consider how quickly your programs need to run or how quickly you need the device to communicate with the IoT server.
+
+ - **End-of-life**: Consider if you need a device with the most up-to-date features and documentation or if you can use a discontinued device as a prototype.
+
+- **Power consumption**
+
+ - **Power**: Consider how much voltage and current the board consumes. Determine if wall power is readily available or if you need a battery for your application.
+
+ - **Connection**: Consider the physical connection to the power source. If you need battery power, check if there's a battery connection port available on the board. If there's no battery connector, seek another comparable board, or consider other ways to add battery power to your device.
+
+- **Inputs and outputs**
+ - **Ports and pins**: Consider how many and of what types of ports and I/O pins your project may require.
+ * Other considerations include if your device will be communicating with other sensors or devices. If so, identify how many ports those signals require.
+
+ - **Protocols**: If you're working with other sensors or devices, consider what hardware communication protocols are required.
+ * For example, you may need CAN, UART, SPI, I2C, or other communication protocols.
+  - **Power**: Consider if your device will be powering other components like sensors. If your device is powering other components, identify the voltage and current output of the device's available power pins and determine what voltage/current your other components need.
+
+  - **Types**: Determine if you need to communicate with analog components. If you need analog components, identify how many analog I/O pins your project needs.
+
+ - **Peripherals**: Consider if you prefer a device with onboard sensors or other features like a screen, microphone, etc.
+
+- **Development**
+
+ - **Programming language**: Consider if your project requires higher-level languages beyond C/C++. If so, identify the common programming languages for the application you need (for example, Machine Learning is often done in Python). Think about what SDKs, APIs, and/or libraries are helpful or necessary for your project. Identify what programming language(s) these are supported in.
+
+ - **IDE**: Consider the development environments that the device supports and if this meets the needs, skill set, and/or preferences of your developers.
+
+ - **Community**: Consider how much assistance you want/need in building a solution. For example, consider if you prefer to start with sample code, if you want troubleshooting advice or assistance, or if you would benefit from an active community that generates new samples and updates documentation.
+
+ - **Documentation**: Take a look at the device documentation. Identify if it's complete and easy to follow. Consider if you need schematics, samples, datasheets, or other types of documentation. If so, do some searching to see if those items are available for your project. Consider the software SDKs/APIs/libraries that are written for the board and if these items would make your prototyping process easier. Identify if this documentation is maintained and who the maintainers are.
+
+- **Security**
+
+ - **Networking**: Consider if your device is connected to an external network or if it can be kept behind a router and/or firewall. If your prototype needs to be connected to an externally facing network, we recommend using the Azure Sphere as it is the only reliably secure device.
+
+ - **Peripherals**: Consider if any of the peripherals your device connects to have wireless protocols (for example, WiFi, BLE).
+
+ - **Physical location**: Consider if your device or any of the peripherals it's connected to will be accessible to the public. If so, we recommend making the device physically inaccessible. For example, in a closed, locked box.
+
+## Application selection visual
+
+>[!NOTE]
+>This list is for educational purposes only; it is not intended to endorse any products.
+>
+
+## Terminology and ML requirements
+
+This section provides definitions for embedded terminology and acronyms and hardware specifications for visual, auditory, and sensor machine learning applications.
+
+### Terminology
+
+Terminology and acronyms are listed in alphabetical order.
+
+| Term | Definition |
+| - | |
+| ADC | Analog to digital converter; converts analog signals from connected components like sensors to digital signals that are readable by the device |
+| Analog pins | Used for connecting analog components that have continuous signals like photoresistors (light sensors) and microphones |
+| Clock speed | How quickly the CPU can retrieve and interpret instructions |
+| Digital pins | Used for connecting digital components that have binary signals like LEDs and switches |
+| Flash (or ROM) | Memory available for storing programs |
+| IDE | Integrated development environment; a program for writing software code |
+| IMU | Inertial measurement unit |
+| IO (or I/O) pins | Input/Output pins used for communicating with other devices like sensors and other controllers |
+| MCU | Microcontroller Unit; a small computer on a single chip that includes a CPU, RAM, and IO |
+| MPU | Microprocessor unit; a computer processor that incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC), or at most a few integrated circuits. |
+| ML | Machine learning; special computer programs that do complex pattern recognition |
+| PWM | Pulse width modulation; a way to modify digital signals to achieve analog-like effects like changing brightness, volume, and speed |
+| RAM | Random access memory; how much memory is available to run programs |
+| SBC | Single board computer |
+| TF | TensorFlow; a machine learning software package designed for edge devices |
+| TF Lite | TensorFlow Lite; a smaller version of TF for small edge devices |
+
+### Machine learning hardware requirements
+
+#### Vision ML
+
+- Speed: 200 MHz
+- Flash: 300 kB
+- RAM: 100 kB
+
+#### Speech ML
+
+- Speed: 60 MHz [^4]
+- Flash: 50 kB
+- RAM: 8 kB
+
+#### Sensor ML (for example, motion, distance)
+
+- Speed: 20 MHz
+- Flash: 20 kB
+- RAM: 2 kB
+
+[^4]: *Speed requirement is largely due to the need for processors to be able to sample a minimum of 6 kHz for microphones to be able to process human vocal frequencies.*
+
+## MCU device list
+
+Following is a comparison table of MCUs in alphabetical order. The list isn't intended to be exhaustive.
+
+>[!NOTE]
+>This list is for educational purposes only; it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
+
+| Board Name | Price Range (USD) | What is it used for? | Software | Speed | Processor | Memory | Onboard Sensors and Other Features | IO Pins | Video | Radio | Battery Connector? | Operating Voltage | Getting Started Guides | **Alternatives** |
+| - | - | - | -| - | - | - | - | - | - | - | - | - | - | - |
+| [Azure Sphere MT3620 Dev Kit](https://aka.ms/IotDeviceList/Sphere) | ~$40 - $100 | Highly secure applications | C/C++, VS Code, VS | 500 MHz & 200 MHz | MT3620 (tri-core--1 x Cortex A7, 2 x Cortex M4) | 4-MB RAM + 2 x 64-KB RAM | Certifications: CE/FCC/MIC/RoHS | 4 x Digital IO, 1 x I2S, 4 x ADC, 1 x RTC | - | Dual-band 802.11 b/g/n with antenna diversity | - | 5 V | 1. [Azure Sphere Samples Gallery](https://github.com/Azure/azure-sphere-gallery#azure-sphere-gallery), 2. [Azure Sphere Weather Station](https://www.hackster.io/gatoninja236/azure-sphere-weather-station-d5a2bc)| N/A |
+| [Adafruit HUZZAH32 – ESP32 Feather Board](https://aka.ms/IotDeviceList/AdafruitFeather) | ~$20 - $25 | Monitoring; Beginner IoT; Home automation | Arduino IDE, VS Code | 240 MHz | 32-Bit ESP32 (dual-core Tensilica LX6) | 4 MB SPI Flash, 520 KB SRAM | Hall sensor, 10x capacitive touch IO pins, 50+ add-on boards | 3 x UARTs, 3 x SPI, 2 x I2C, 12 x ADC inputs, 2 x I2S Audio, 2 x DAC | - | 802.11b/g/n HT40 Wi-Fi transceiver, baseband, stack and LWIP, Bluetooth and BLE | √ | 3.3 V | 1. [Scientific freezer monitor](https://www.hackster.io/adi-azulay/azure-edge-impulse-scientific-freezer-monitor-5448ee), 2. [Azure IoT SDK Arduino samples](https://github.com/Azure/azure-sdk-for-c-arduino) | [Arduino Uno WiFi Rev 2 (~$50 - $60)](https://aka.ms/IotDeviceList/ArduinoUnoWifi) |
+| [Arduino Nano 33 BLE Sense](https://aka.ms/IotDeviceList/ArduinoNanoBLE) | ~$30 - $35 | Monitoring; ML; Game controller; Beginner IoT | Arduino IDE, VS Code | 64 MHz | 32-bit Nordic nRF52840 (Cortex M4F) | 1 MB Flash, 256 KB SRAM | 9-axis inertial sensor, Humidity and temp sensor, Barometric sensor, Microphone, Gesture, proximity, light color and light intensity sensor | 14 x Digital IO, 1 x UART, 1 x SPI, 1 x I2C, 8 x ADC input | - | Bluetooth and BLE | - | 3.3 V – 21 V | 1. [Connect Nano BLE to Azure IoT Hub](https://create.arduino.cc/projecthub/Arduino_Genuino/securely-connecting-an-arduino-nb-1500-to-azure-iot-hub-af6470), 2. [Monitor beehive with Azure Functions](https://www.hackster.io/clementchamayou/how-to-monitor-a-beehive-with-arduino-nano-33ble-bluetooth-eabc0d) | [Seeed XIAO BLE sense (~$15 - $20)](https://aka.ms/IotDeviceList/SeeedXiao) |
+| [Arduino Nano RP2040 Connect](https://aka.ms/IotDeviceList/ArduinoRP2040Nano) | ~$20 - $25 | Remote control; Monitoring | Arduino IDE, VS Code, C/C++, MicroPython | 133 MHz | 32-bit RP2040 (dual-core Cortex M0+) | 16 MB Flash, 264-kB RAM | Microphone, Six-axis IMU with AI capabilities | 22 x Digital IO, 20 x PWM, 8 x ADC | - | WiFi, Bluetooth | - | 3.3 V | - |[Adafruit Feather RP2040 (NOTE: also need a FeatherWing for WiFi)](https://aka.ms/IotDeviceList/AdafruitRP2040) |
+| [ESP32-S2 Saola-1](https://aka.ms/IotDeviceList/ESPSaola) | ~$10 - $15 | Home automation; Beginner IoT; ML; Monitoring; Mesh networking | Arduino IDE, Circuit Python, ESP IDF | 240 MHz | 32-bit ESP32-S2 (single-core Xtensa LX7) | 128 kB Flash, 320 kB SRAM, 16 kB SRAM (RTC) | 14 x capacitive touch IO pins, Temp sensor | 43 x Digital pins, 8 x PWM, 20 x ADC, 2 x DAC | Serial LCD, Parallel PCD | Wi-Fi 802.11 b/g/n (802.11n up to 150 Mbps) | - | 3.3 V | 1. [Secure face detection with Azure ML](https://www.hackster.io/achindra/microsoft-azure-machine-learning-and-face-detection-in-iot-2de40a), 2. [Azure Cost Monitor](https://www.hackster.io/jenfoxbot/azure-cost-monitor-31811a) | [ESP32-DevKitC (~$10 - $15)](https://aka.ms/IotDeviceList/ESPDevKit) |
+| [Wio Terminal (Seeed Studio)](https://aka.ms/IotDeviceList/WioTerminal) | ~$40 - $50 | Monitoring; Home Automation; ML | Arduino IDE, VS Code, MicroPython, ArduPy | 120 MHz | 32-bit ATSAMD51 (single-core Cortex-M4F) | 4 MB SPI Flash, 192-kB RAM | On-board screen, Microphone, IMU, buzzer, microSD slot, light sensor, IR emitter, Raspberry Pi GPIO mount (as child device) | 26 x Digital Pins, 5 x PWM, 9 x ADC | 2.4" 320x420 Color LCD | dual-band 2.4Ghz/5Ghz (Realtek RTL8720DN) | - | 3.3 V | [Monitor plants with Azure IoT](https://github.com/microsoft/IoT-For-Beginners/tree/main/2-farm/lessons/4-migrate-your-plant-to-the-cloud) | [Adafruit FunHouse (~$30 - $40)](https://aka.ms/IotDeviceList/AdafruitFunhouse) |
+
+## SBC device list
+
+Following is a comparison table of SBCs in alphabetical order. This list isn't intended to be exhaustive.
+
+>[!NOTE]
+>This list is for educational purposes only; it is not intended to endorse any products. Prices shown represent the average across multiple distributors and are for illustrative purposes only.
+
+| Board Name | Price Range (USD) | What is it used for? | Software| Speed | Processor | Memory | Onboard Sensors and Other Features | IO Pins | Video | Radio | Battery Connector? | Operating Voltage | Getting Started Guides | **Alternatives** |
+| - | - | - | -| - | - | - | - | - | - | - | - | - | - | -|
+| [Raspberry Pi 4, Model B](https://aka.ms/IotDeviceList/RpiModelB) | ~$30 - $80 | Home automation; Robotics; Autonomous vehicles; Control systems; Field science | Raspberry Pi OS, Raspbian, Ubuntu 20.04/21.04, RISC OS, Windows 10 IoT, more | 1.5 GHz CPU, 500 MHz GPU | 64-bit Broadcom BCM2711 (quad-core Cortex-A72), VideoCore VI GPU | 2GB/4GB/8GB LPDDR4 RAM, SD Card (not included) | 2 x USB 3 ports, 1 x MIPI DSI display port, 1 x MIPI CSI camera port, 4-pole stereo audio and composite video port, Power over Ethernet (requires HAT) | 26 x Digital, 4 x PWM | 2 micro-HDMI composite, MPI DSI | WiFi, Bluetooth | √ | 5 V | 1. [Send data to IoT Hub](https://www.hackster.io/jenfoxbot/how-to-send-see-data-from-a-raspberry-pi-to-azure-iot-hub-908924), 2. [Monitor plants with Azure IoT](https://github.com/microsoft/IoT-For-Beginners/tree/main/2-farm/lessons/4-migrate-your-plant-to-the-cloud)| [BeagleBone Black Wireless (~$50 - $60)](https://www.beagleboard.org/boards/beaglebone-black-wireless) |
+| [NVIDIA Jetson 2 GB Nano Dev Kit](https://aka.ms/IotDeviceList/NVIDIAJetson) | ~$50 - $100 | AI/ML; Autonomous vehicles | Ubuntu-based JetPack | 1.43 GHz CPU, 921 MHz GPU | 64-bit Nvidia CPU (quad-core Cortex-A57), 128-CUDA-core Maxwell GPU coprocessor | 2GB/4GB LPDDR4 RAM | 472 GFLOPS for AI Perf, 1 x MIPI CSI-2 connector | 28 x Digital, 2 x PWM | HDMI, DP (4 GB only) | Gigabit Ethernet, 802.11ac WiFi | √ | 5 V | [Deepstream integration with Azure IoT Central](https://www.hackster.io/pjdecarlo/nvidia-deepstream-integration-with-azure-iot-central-d9f834) | [BeagleBone AI (~$110 - $120)](https://aka.ms/IotDeviceList/BeagleBoneAI) |
+| [Raspberry Pi Zero W2](https://aka.ms/IotDeviceList/RpiZeroW) | ~$15 - $20 | Home automation; ML; Vehicle modifications; Field Science | Raspberry Pi OS, Raspbian, Ubuntu 20.04/21.04, RISC OS, Windows 10 IoT, more | 1 GHz CPU, 400 MHz GPU | 64-bit Broadcom BCM2837 (quad-core Cortex-A53), VideoCore IV GPU | 512 MB LPDDR2 RAM, SD Card (not included) | 1 x CSI-2 Camera connector | 26 x Digital, 4 x PWM | Mini-HDMI | WiFi, Bluetooth | - | 5 V | [Send and visualize data to Azure IoT Hub](https://www.hackster.io/jenfoxbot/how-to-send-see-data-from-a-raspberry-pi-to-azure-iot-hub-908924) | [Onion Omega2+ (~$10 - $15)](https://onion.io/Omega2/) |
+| [DFRobot LattePanda](https://aka.ms/IotDeviceList/DFRobotLattePanda) | ~$100 - $160 | Home automation; Hyperscale cloud connectivity; AI/ML | Windows 10, Ubuntu 16.04, OpenSuSE 15 | 1.92 GHz | 64-bit Intel Z8350 (quad-core x86-64), Atmega32u4 coprocessor | 2 GB DDR3L RAM, 32 GB eMMC/4GB DDR3L RAM, 64-GB eMMC | - | 6 x Digital (20 x via Atmega32u4), 6 x PWM, 12 x ADC | HDMI, MIPI DSI | WiFi, Bluetooth | √ | 5 V | 1. [Getting started with Microsoft Azure](https://www.hackster.io/45361/dfrobot-lattepanda-with-microsoft-azure-getting-started-0ae8fb), 2. [Home Monitoring System with Azure](https://www.hackster.io/JiongShi/home-monitoring-system-based-on-lattepanda-zigbee-and-azure-ce4e03)| [Seeed Odyssey X86J4125800 (~$210 - $230)](https://aka.ms/IotDeviceList/SeeedOdyssey) |
+
+## Questions? Requests?
+
+Please submit an issue!
+
+## See Also
+
+Other helpful resources include:
+
+- [Overview of Azure IoT device types](./concepts-iot-device-types.md)
+- [Overview of Azure IoT Device SDKs](./iot-sdks.md)
+- [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](./tutorial-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)
+- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
iot Concepts Iot Device Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-iot-device-types.md
+
+ Title: Overview of Azure IoT device types
+description: Learn the different device types supported by Azure IoT and the tools available.
++++ Last updated : 04/04/2024++
+# Overview of Azure IoT device types
+IoT devices exist across a broad selection of hardware platforms. There are small 8-bit MCUs all the way up to the latest x86 CPUs as found in a desktop computer. Many variables factor into the decision of which hardware to choose for an IoT device, and this article outlines some of the key differences.
+
+## Key hardware differentiators
+Some important factors when choosing your hardware are cost, power consumption, networking, and available inputs and outputs.
+
+* **Cost:** Smaller, cheaper devices are typically used when mass producing the final product. However, the trade-off is that development can be more expensive given the highly constrained device. The development cost can be spread across all produced devices, so the per-unit development cost will be low.
+
+* **Power:** How much power a device consumes is important if the device runs on batteries and isn't connected to the power grid. MCUs are often designed for lower power scenarios and can be a better choice for extending battery life.
+
+* **Network Access:** There are many ways to connect a device to a cloud service. Ethernet, Wi-Fi, and cellular are some of the available options. The connection type you choose depends on where the device is deployed and how it's used. For example, cellular can be an attractive option given its high coverage; however, for high-traffic devices it can be expensive. Hardwired Ethernet provides cheaper data costs but with the downside of being less portable.
+
+* **Inputs and Outputs:** The inputs and outputs available on the device directly affect the device's operating capabilities. A microcontroller typically has many I/O functions built directly into the chip, which makes it easy to connect a wide range of sensors directly.
+
+## Microcontrollers vs Microprocessors
+IoT devices can be separated into two broad categories, microcontrollers (MCUs) and microprocessors (MPUs).
+
+**MCUs** are less expensive and simpler to operate than MPUs. An MCU contains many of the functions, such as memory, interfaces, and I/O, within the chip itself. An MPU draws this functionality from components in supporting chips. An MCU often uses a real-time OS (RTOS) or runs bare metal (no OS), providing real-time responses and highly deterministic reactions to external events.
+
+**MPUs** generally run a general-purpose OS, such as Windows, Linux, or macOS, which provides a nondeterministic real-time response. There's typically no guarantee of when a task will be completed.
++
+Below is a table showing some of the defining differences between an MCU and an MPU based system:
+
+||Microcontroller (MCU)|Microprocessor (MPU)|
+|-|-|-|
+|**CPU**| Less | More |
+|**RAM**| Less | More |
+|**Flash**| Less | More |
+|**OS**| Bare Metal / RTOS | General Purpose (Windows / Linux) |
+|**Development Difficulty**| Harder | Easier |
+|**Power Consumption**| Lower | Higher |
+|**Cost**| Lower | Higher |
+|**Deterministic**| Yes | No - with exceptions |
+|**Device Size**| Smaller | Larger |
+
+## Next steps
+The IoT device type that you choose directly impacts how the device is connected to Azure IoT.
+
+Browse the different [Azure IoT SDKs](./iot-sdks.md) to find the one that best suits your device needs.
iot Concepts Manage Device Reconnections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-manage-device-reconnections.md
+
+ Title: Manage device reconnections to create resilient applications
+
+description: Manage the device connection and reconnection process to ensure resilient applications by using the Azure IoT Hub device SDKs.
+++ Last updated : 04/04/2024+++++
+# Manage device reconnections to create resilient applications
+
+This article provides high-level guidance to help you design resilient applications by adding a device reconnection strategy. It explains why devices disconnect and need to reconnect. And it describes specific strategies that developers can use to reconnect devices that have been disconnected.
+
+## What causes disconnections
+The following are the most common reasons that devices disconnect from IoT Hub:
+
+- Expired SAS token or X.509 certificate. The device's SAS token or X.509 authentication certificate expired.
+- Network interruption. The device's connection to the network is interrupted.
+- Service disruption. The Azure IoT Hub service experiences errors or is temporarily unavailable.
+- Service reconfiguration. After you reconfigure IoT Hub service settings, it can cause devices to require reprovisioning or reconnection.
+
+## Why you need a reconnection strategy
+
+It's important to have a strategy to reconnect devices as described in the following sections. Without a reconnection strategy, you could see a negative effect on your solution's performance, availability, and cost.
+
+### Mass reconnection attempts could cause a DDoS
+
+A high number of connection attempts per second can cause a condition similar to a distributed denial-of-service attack (DDoS). This scenario is relevant for large fleets of devices numbering in the millions. The issue can extend beyond the tenant that owns the fleet and affect the entire scale-unit. A DDoS could drive a large cost increase for your Azure IoT Hub resources, due to a need to scale out. A DDoS could also hurt your solution's performance due to resource starvation. In the worst case, a DDoS can cause service interruption.
+
+### Hub failure or reconfiguration could disconnect many devices
+
+After an IoT hub experiences a failure, or after you reconfigure service settings on an IoT hub, devices might be disconnected. For proper failover, disconnected devices require reprovisioning. To learn more about failover options, see [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md).
+
+### Reprovisioning many devices could increase costs
+
+After devices disconnect from IoT Hub, the optimal solution is to reconnect the device rather than reprovision it. If you use IoT Hub with DPS, DPS has a per provisioning cost. If you reprovision many devices on DPS, it increases the cost of your IoT solution. To learn more about DPS provisioning costs, see [IoT Hub DPS pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+## Design for resiliency
+
+IoT devices often rely on noncontinuous or unstable network connections (for example, GSM or satellite). Errors can occur when devices interact with cloud-based services because of intermittent service availability and infrastructure-level or transient faults. An application that runs on a device has to manage the mechanisms for connection, reconnection, and the retry logic for sending and receiving messages. Also, the retry strategy requirements depend heavily on the device's IoT scenario, context, and capabilities.
+
+The Azure IoT Hub device SDKs aim to simplify connecting and communicating from cloud-to-device and device-to-cloud. These SDKs provide a robust way to connect to Azure IoT Hub and a comprehensive set of options for sending and receiving messages. Developers can also modify existing implementation to customize a better retry strategy for a given scenario.
+
+The relevant SDK features that support connectivity and reliable messaging are available in the following IoT Hub device SDKs. For more information, see the API documentation or specific SDK:
+
+* [C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/main/doc/connection_and_messaging_reliability.md)
+
+* [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md)
+
+* [Java SDK](https://github.com/Azure/azure-iot-sdk-jav)
+
+* [Node SDK](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries)
+
+* [Python SDK](https://github.com/Azure/azure-iot-sdk-python)
+
+The following sections describe SDK features that support connectivity.
+
+## Connection and retry
+
+This section gives an overview of the reconnection and retry patterns available when managing connections. It details implementation guidance for using a different retry policy in your device application and lists relevant APIs from the device SDKs.
+
+### Error patterns
+
+Connection failures can happen at many levels:
+
+* Network errors: disconnected socket and name resolution errors
+
+* Protocol-level errors for HTTP, AMQP, and MQTT transport: detached links or expired sessions
+
+* Application-level errors that result from either local mistakes (for example, invalid credentials) or service behavior (for example, exceeding the quota or throttling)
+
+The device SDKs detect errors at all three levels. However, device SDKs don't detect and handle OS-related errors and hardware errors. The SDK design is based on [The Transient Fault Handling Guidance](/azure/architecture/best-practices/transient-faults#general-guidelines) from the Azure Architecture Center.
+
+### Retry patterns
+
+The following steps describe the retry process when connection errors are detected:
+
+1. The SDK detects the error and the level at which it occurred: network, protocol, or application.
+
+1. The SDK uses the error filter to determine the error type and decide if a retry is needed.
+
+1. If the SDK identifies an **unrecoverable error**, operations like connection, send, and receive are stopped. The SDK notifies the user. Examples of unrecoverable errors include an authentication error and a bad endpoint error.
+
+1. If the SDK identifies a **recoverable error**, it retries according to the specified retry policy until the defined timeout elapses. The SDK uses **Exponential back-off with jitter** retry policy by default.
+
+1. When the defined timeout expires, the SDK stops trying to connect or send. It notifies the user.
+
+1. The SDK allows the user to attach a callback to receive connection status changes.
+
+The SDKs typically provide three retry policies:
+
+* **Exponential back-off with jitter**: This default retry policy tends to be aggressive at the start and slow down over time until it reaches a maximum delay. The design is based on [Retry guidance from Azure Architecture Center](/azure/architecture/best-practices/retry-service-specific).
+
+* **Custom retry**: For some SDK languages, you can design a custom retry policy that is better suited for your scenario and then inject it into the RetryPolicy. Custom retry isn't available on the C SDK, and it isn't currently supported on the Python SDK. The Python SDK reconnects as-needed.
+
+* **No retry**: You can set retry policy to "no retry", which disables the retry logic. The SDK tries to connect once and send a message once, assuming the connection is established. This policy is typically used in scenarios with bandwidth or cost concerns. If you choose this option, messages that fail to send are lost and can't be recovered.
+
+### Retry policy APIs
+
+| SDK | SetRetryPolicy method | Policy implementations | Implementation guidance |
+|||||
+| C | [IOTHUB_CLIENT_RESULT IoTHubDeviceClient_SetRetryPolicy](https://azure.github.io/azure-iot-sdk-c/iothub__device__client_8h.html#a53604d8d75556ded769b7947268beec8) | See: [IOTHUB_CLIENT_RETRY_POLICY](https://azure.github.io/azure-iot-sdk-c/iothub__client__core__common_8h.html#a361221e523247855ff0a05c2e2870e4a) | [C implementation](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md) |
+| Java | [SetRetryPolicy](/jav) |
+| .NET | [DeviceClient.SetRetryPolicy](/dotnet/api/microsoft.azure.devices.client.deviceclient.setretrypolicy) | **Default**: [ExponentialBackoff class](/dotnet/api/microsoft.azure.devices.client.exponentialbackoff)<BR>**Custom:** implement [IRetryPolicy interface](/dotnet/api/microsoft.azure.devices.client.iretrypolicy)<BR>**No retry:** [NoRetry class](/dotnet/api/microsoft.azure.devices.client.noretry) | [C# implementation](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md) |
+| Node | [setRetryPolicy](/javascript/api/azure-iot-device/client#azure-iot-device-client-setretrypolicy) | **Default**: [ExponentialBackoffWithJitter class](/javascript/api/azure-iot-common/exponentialbackoffwithjitter)<BR>**Custom:** implement [RetryPolicy interface](/javascript/api/azure-iot-common/retrypolicy)<BR>**No retry:** [NoRetry class](/javascript/api/azure-iot-common/noretry) | [Node implementation](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries) |
+| Python | Not currently supported | Not currently supported | Built-in connection retries: Dropped connections are retried with a fixed 10-second interval by default. This functionality can be disabled if desired, and the interval can be configured. |
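+
+As an illustration of the APIs in the preceding table, the following minimal sketch uses the C SDK to select the default exponential back-off with jitter policy and to register a connection status callback. It's a hedged example, not code from the SDK documentation; the connection string placeholder and the 1200-second retry timeout are assumptions.
+
+```c
+#include <stdio.h>
+#include "iothub.h"
+#include "iothub_device_client_ll.h"
+#include "iothubtransportmqtt.h"
+
+// Placeholder connection string; replace with your device's connection string.
+static const char* connectionString = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>";
+
+// Called by the SDK whenever the connection status changes, for example after a retry succeeds or the retry timeout expires.
+static void connection_status_callback(IOTHUB_CLIENT_CONNECTION_STATUS status,
+    IOTHUB_CLIENT_CONNECTION_STATUS_REASON reason, void* user_context)
+{
+    (void)user_context;
+    printf("Connection status: %d, reason: %d\n", (int)status, (int)reason);
+}
+
+int main(void)
+{
+    IoTHub_Init();
+    IOTHUB_DEVICE_CLIENT_LL_HANDLE device_handle =
+        IoTHubDeviceClient_LL_CreateFromConnectionString(connectionString, MQTT_Protocol);
+    if (device_handle == NULL)
+    {
+        return 1;
+    }
+
+    // Select the default policy explicitly: exponential back-off with jitter,
+    // giving up after 1200 seconds of failed retries (an assumed value).
+    IoTHubDeviceClient_LL_SetRetryPolicy(device_handle, IOTHUB_CLIENT_RETRY_EXPONENTIAL_BACKOFF_WITH_JITTER, 1200);
+
+    // Get notified when the SDK reconnects or stops retrying.
+    IoTHubDeviceClient_LL_SetConnectionStatusCallback(device_handle, connection_status_callback, NULL);
+
+    // ... send telemetry and call IoTHubDeviceClient_LL_DoWork in your main loop ...
+
+    IoTHubDeviceClient_LL_Destroy(device_handle);
+    IoTHub_Deinit();
+    return 0;
+}
+```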
+
+## Hub reconnection flow
+
+If you use IoT Hub only without DPS, use the following reconnection strategy.
+
+When a device fails to connect to IoT Hub, or is disconnected from IoT Hub:
+
+1. Use an exponential back-off with jitter delay function.
+1. Reconnect to IoT Hub.
+
+The following diagram summarizes the reconnection flow:
+++
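+
+One way to implement the back-off step on the device is sketched below. This is a rough illustration, not code from the SDKs; the 1-second base delay and 300-second cap are assumptions you should tune for your scenario, and you should call `srand` once at startup before using it.
+
+```c
+#include <stdlib.h>
+
+// Compute the delay, in seconds, before reconnection attempt number `attempt` (0, 1, 2, ...),
+// using exponential back-off with jitter.
+static unsigned int reconnect_delay_seconds(unsigned int attempt)
+{
+    const unsigned int base_delay = 1;    // first retry after about 1 second
+    const unsigned int max_delay = 300;   // cap the back-off at 5 minutes
+
+    unsigned int exponent = (attempt < 9) ? attempt : 9;  // avoid shifting too far
+    unsigned int delay = base_delay << exponent;          // 1, 2, 4, 8, ... seconds
+    if (delay > max_delay)
+    {
+        delay = max_delay;
+    }
+
+    // Add up to ~20% random jitter so a fleet of devices doesn't retry in lockstep.
+    unsigned int jitter = (unsigned int)(rand() % (int)(delay / 5 + 1));
+    return delay + jitter;
+}
+```
+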
+## Hub with DPS reconnection flow
+
+If you use IoT Hub with DPS, use the following reconnection strategy.
+
+When a device fails to connect to IoT Hub, or is disconnected from IoT Hub, reconnect based on the following cases:
+
+|Reconnection scenario | Reconnection strategy |
+|||
+|For errors that allow connection retries (HTTP response code 500) | Use an exponential back-off with jitter delay function. <br> Reconnect to IoT Hub. |
+|For errors that indicate a retry is possible, but reconnection has failed 10 consecutive times | Reprovision the device to DPS. |
+|For errors that don't allow connection retries (HTTP responses 401, Unauthorized or 403, Forbidden or 404, Not Found) | Reprovision the device to DPS. |
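+
+As a rough illustration of the preceding table, the reconnect-or-reprovision decision could be coded as follows. The status codes and the 10-attempt threshold come from the guidance in this section; the type and function names are hypothetical.
+
+```c
+typedef enum { RECONNECT_RETRY, RECONNECT_REPROVISION } reconnect_action;
+
+static reconnect_action choose_reconnect_action(int http_status, int consecutive_failures)
+{
+    // Errors that don't allow connection retries: reprovision the device through DPS.
+    if (http_status == 401 || http_status == 403 || http_status == 404)
+    {
+        return RECONNECT_REPROVISION;
+    }
+
+    // Retryable errors that have already failed 10 times in a row: reprovision through DPS.
+    if (consecutive_failures >= 10)
+    {
+        return RECONNECT_REPROVISION;
+    }
+
+    // Otherwise (for example, HTTP 500), back off with jitter and reconnect to IoT Hub.
+    return RECONNECT_RETRY;
+}
+```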
+
+The following diagram summarizes the reconnection flow:
++
+## Next steps
+
+Suggested next steps include:
+
+- [Troubleshoot device disconnects](../iot-hub/iot-hub-troubleshoot-connectivity.md)
+
+- [Deploy devices at scale](../iot-dps/concepts-deploy-at-scale.md)
iot Concepts Using C Sdk And Embedded C Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-using-c-sdk-and-embedded-c-sdk.md
+
+ Title: C SDK and Embedded C SDK usage scenarios
+description: Helps developers decide which C-based Azure IoT device SDK to use for device development, based on their usage scenario.
++++ Last updated : 04/04/2024+
+#Customer intent: As a device developer, I want to understand when to use the Azure IoT C SDK or the Embedded C SDK to optimize device and application performance.
++
+# C SDK and Embedded C SDK usage scenarios
+
+Microsoft provides Azure IoT device SDKs and middleware for embedded and constrained device scenarios. This article helps device developers decide which one to use for their application.
+
+The following diagram shows four common scenarios in which customers connect devices to Azure IoT, using a C-based (C99) SDK. The rest of this article provides more details on each scenario.
++
+## Scenario 1 – Azure IoT C SDK (for Linux and Windows)
+
+Starting in 2015, [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) was the first Azure SDK created to connect devices to IoT services. It's a stable platform that was built to provide the following capabilities for connecting devices to Azure IoT:
+- IoT Hub services
+- Device Provisioning Service clients
+- Three choices of communication transport (MQTT, AMQP and HTTP), which are created and maintained by Microsoft
+- Multiple choices of common TLS stacks (OpenSSL, Schannel, and mbed TLS according to the target platform)
+- TCP sockets (Win32, Berkeley or Mbed)
+
+Providing communication transport, TLS and socket abstraction has a performance cost. Many paths require `malloc` and `memcpy` calls between the various abstraction layers. This performance cost is small compared to a desktop or a Raspberry Pi device. Yet on a truly constrained device, the cost becomes significant overhead with the possibility of memory fragmentation. The communication transport layer also requires a `doWork` function to be called at least every 100 milliseconds. These frequent calls make it harder to optimize the SDK for battery powered devices. The existence of multiple abstraction layers also makes it hard for customers to use or change to any given library.
+
+Scenario 1 is recommended for Windows or Linux devices, which are normally less sensitive to memory usage or power consumption. However, Windows and Linux-based devices can also use the Embedded C SDK as shown in Scenario 2. Other options for Windows and Linux-based devices include the other Azure IoT device SDKs: [Java SDK](https://github.com/Azure/azure-iot-sdk-java), [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp), [Node SDK](https://github.com/Azure/azure-iot-sdk-node), and [Python SDK](https://github.com/Azure/azure-iot-sdk-python).
+
+## Scenario 2 ΓÇô Embedded C SDK (for Bare Metal scenarios and micro-controllers)
+
+In 2020, Microsoft released the [Azure SDK for Embedded C](https://github.com/Azure/azure-sdk-for-c/tree/main/sdk/docs/iot) (also known as the Embedded C SDK). This SDK was built based on customer feedback and a growing need to support constrained [micro-controller devices](./concepts-iot-device-types.md#microcontrollers-vs-microprocessors). Typically, constrained micro-controllers have reduced memory and processing power.
+
+The Embedded C SDK has the following key characteristics:
+- No dynamic memory allocation. Customers must allocate data structures where they choose, such as in global memory, a heap, or a stack. Then they must pass the address of the allocated structure into SDK functions to initialize and perform various operations (see the sketch after this list).
+- MQTT only. MQTT-only usage is ideal for constrained devices because it's an efficient, lightweight network protocol. Currently only MQTT v3.1.1 is supported.
+- Bring your own network stack. The Embedded C SDK performs no I/O operations. This approach allows customers to select the MQTT, TLS and Socket clients that have the best fit to their target platform.
+- Similar [feature set](./concepts-iot-device-types.md#microcontrollers-vs-microprocessors) to the C SDK. The Embedded C SDK provides similar features to the Azure IoT C SDK, with the following exceptions, which the Embedded C SDK doesn't provide:
+ - Upload to blob
+ - The ability to run as an IoT Edge module
+ - AMQP-based features like content message batching and device multiplexing
+- Smaller overall [footprint](https://github.com/Azure/azure-sdk-for-c/tree/main/sdk/docs/iot#size-chart). The Embedded C SDK, as seen in a sample that shows how to connect to IoT Hub, can take as little as 74 KB of ROM and 8.26 KB of RAM.
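+
+To illustrate the bring-your-own-buffers model described in the first bullet above, the following minimal sketch initializes a statically allocated client and asks the SDK to format a telemetry topic into a caller-owned buffer. The hub host name and device ID are placeholders, and the MQTT publish itself is left to whatever MQTT client you bring:
+
+```c
+#include <stddef.h>
+#include <stdio.h>
+
+#include <azure/az_core.h>
+#include <azure/az_iot.h>
+
+// All buffers are allocated by the application (here in static storage),
+// never by the SDK.
+static az_iot_hub_client client;
+static char telemetry_topic[128];
+
+int main(void)
+{
+    az_result rc = az_iot_hub_client_init(
+        &client,
+        AZ_SPAN_FROM_STR("my-hub.azure-devices.net"), // placeholder hub host name
+        AZ_SPAN_FROM_STR("my-device"),                // placeholder device ID
+        NULL);                                        // default options
+    if (az_result_failed(rc))
+    {
+        return 1;
+    }
+
+    // The SDK only formats the MQTT topic into the caller's buffer;
+    // the application publishes on it with its own MQTT and TLS stack.
+    size_t topic_length = 0;
+    rc = az_iot_hub_client_telemetry_get_publish_topic(
+        &client, NULL, telemetry_topic, sizeof(telemetry_topic), &topic_length);
+    if (az_result_failed(rc))
+    {
+        return 1;
+    }
+
+    printf("Publish telemetry to topic: %.*s\n", (int)topic_length, telemetry_topic);
+    return 0;
+}
+```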
+
+The Embedded C SDK supports micro-controllers with no operating system, micro-controllers with a real-time operating system (like Azure RTOS), Linux, and Windows. Customers can implement custom platform layers to use the SDK on custom devices. The SDK also provides some platform layers such as [Arduino](https://github.com/Azure/azure-sdk-for-c-arduino), and [Swift](https://github.com/Azure-Samples/azure-sdk-for-c-swift). Microsoft encourages the community to submit other platform layers to increase the out-of-the-box supported platforms. Wind River [VxWorks](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/samples/iot/docs/how_to_iot_hub_samples_vxworks.md) is an example of a platform layer submitted by the community.
+
+The Embedded C SDK adds some programming benefits because of its flexibility compared to the Azure IoT C SDK. In particular, applications that use constrained devices will benefit from enormous resource savings and greater programmatic control. In comparison, if you use Azure RTOS or FreeRTOS, you can have these same benefits along with other features per RTOS implementation.
+
+## Scenario 3 ΓÇô Azure RTOS with Azure RTOS middleware (for Azure RTOS-based projects)
+
+Scenario 3 involves using Azure RTOS and the [Azure RTOS middleware](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot). The Azure RTOS middleware is built on top of the Embedded C SDK, and adds MQTT and TLS support. The middleware for Azure RTOS exposes APIs for the application that are similar to the native Azure RTOS APIs. This approach makes it simpler for developers to use the APIs and connect their Azure RTOS-based devices to Azure IoT. Azure RTOS is a fully integrated, efficient, real-time embedded platform that provides all the networking and IoT features you need for your solution.
+
+Samples for several popular developer kits from ST, NXP, Renesas, and Microchip, are available. These samples work with Azure IoT Hub or Azure IoT Central, and are available as IAR Workbench or semiconductor IDE projects on [GitHub](https://github.com/azure-rtos/samples).
+
+Because it's based on the Embedded C SDK, the Azure IoT middleware for Azure RTOS is non-memory allocating. Customers must allocate SDK data structures in global memory, or a heap, or a stack. After customers allocate a data structure, they must pass the address of the structure into the SDK functions to initialize and perform various operations.
+
+## Scenario 4 ΓÇô FreeRTOS with FreeRTOS middleware (for use with FreeRTOS-based projects)
+
+Scenario 4 brings the embedded C middleware to FreeRTOS. The embedded C middleware is built on top of the Embedded C SDK and adds MQTT support via the open source coreMQTT library. This middleware for FreeRTOS operates at the MQTT level. It establishes the MQTT connection, subscribes to and unsubscribes from topics, and sends and receives messages. Disconnections are handled by the customer via middleware APIs.
+
+Customers control the TLS/TCP configuration and connection to the endpoint. This approach allows for flexibility between software or hardware implementations of either stack. No background tasks are created by the Azure IoT middleware for FreeRTOS. Messages are sent and received synchronously.
+
+The core implementation is provided in this [GitHub repository](https://github.com/Azure/azure-iot-middleware-freertos). Samples for several popular developer kits are available, including the NXP1060, STM32, and ESP32. The samples work with Azure IoT Hub, Azure IoT Central, and Azure Device Provisioning Service, and are available in this [GitHub repository](https://github.com/Azure-Samples/iot-middleware-freertos-samples).
+
+Because it's based on the Azure Embedded C SDK, the Azure IoT middleware for FreeRTOS is also non-memory allocating. Customers must allocate SDK data structures in global memory, or a heap, or a stack. After customers allocate a data structure, they must pass the address of the allocated structures into the SDK functions to initialize and perform various operations.
+
+## C-based SDK technical usage scenarios
+
+The following diagram summarizes technical options for each SDK usage scenario described in this article.
++
+## C-based SDK comparison by memory and protocols
+
+The following table compares the four device SDK development scenarios based on memory and protocol usage.
+
+| &nbsp; | **Memory <br>allocation** | **Memory <br>usage** | **Protocols <br>supported** | **Recommended for** |
+| :-- | :-- | :-- | :-- | :-- |
+| **Azure IoT C SDK** | Mostly Dynamic | Unrestricted. Can span <br>to 1 MB or more in RAM. | AMQP<br>HTTP<br>MQTT v3.1.1 | Microprocessor-based systems<br>Microsoft Windows<br>Linux<br>Apple OS X |
+| **Azure SDK for Embedded C** | Static only | Restricted by amount of <br>data application allocates. | MQTT v3.1.1 | Micro-controllers <br>Bare-metal Implementations <br>RTOS-based implementations |
+| **Azure IoT Middleware for Azure RTOS** | Static only | Restricted | MQTT v3.1.1 | Micro-controllers <br>RTOS-based implementations |
+| **Azure IoT Middleware for FreeRTOS** | Static only | Restricted | MQTT v3.1.1 | Micro-controllers <br>RTOS-based implementations |
+
+## Azure IoT features supported by each SDK
+
+The following table compares the four device SDK development scenarios based on support for Azure IoT features.
+
+| &nbsp; | **Azure IoT C SDK** | **Azure SDK for <br>Embedded C** | **Azure IoT <br>middleware for <br>Azure RTOS** | **Azure IoT <br>middleware for <br>FreeRTOS** |
+| :-- | :-- | :-- | :-- | :-- |
+| SAS Client Authentication | Yes | Yes | Yes | Yes |
+| x509 Client Authentication | Yes | Yes | Yes | Yes |
+| Device Provisioning | Yes | Yes | Yes | Yes |
+| Telemetry | Yes | Yes | Yes | Yes |
+| Cloud-to-Device Messages | Yes | Yes | Yes | Yes |
+| Direct Methods | Yes | Yes | Yes | Yes |
+| Device Twin | Yes | Yes | Yes | Yes |
+| IoT Plug-And-Play | Yes | Yes | Yes | Yes |
+| Telemetry batching <br>(AMQP, HTTP) | Yes | No | No | No |
+| Uploads to Azure Blob | Yes | No | No | No |
+| Automatic integration in <br>IoT Edge hosted containers | Yes | No | No | No |
++
+## Next steps
+
+To learn more about device development and the available SDKs for Azure IoT, see the following table.
+- [Azure IoT Device Development](./iot-overview-device-development.md)
+- [Which SDK should I use](./iot-sdks.md)
iot Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md
An IoT device is typically made up of a circuit board with sensors attached that
* An accelerometer in an elevator. * Presence sensors in a room.
-There's a wide variety of devices available from different manufacturers to build your solution. For prototyping a microprocessor device, you can use a device such as a [Raspberry Pi](https://www.raspberrypi.org/). The Raspberry Pi lets you attach many different types of sensor. For prototyping a microcontroller device, use devices such as the [ESPRESSIF ESP32](../iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md), [STMicroelectronics B-U585I-IOT02A Discovery kit](../iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md), [STMicroelectronics B-L4S5I-IOT01A Discovery kit](../iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md), or [NXP MIMXRT1060-EVK Evaluation kit](../iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md). These boards typically have built-in sensors, such as temperature and accelerometer sensors.
+There's a wide variety of devices available from different manufacturers to build your solution. For prototyping a microprocessor device, you can use a device such as a [Raspberry Pi](https://www.raspberrypi.org/). The Raspberry Pi lets you attach many different types of sensor. For prototyping a microcontroller device, use devices such as the [ESPRESSIF ESP32](./tutorial-devkit-espressif-esp32-freertos-iot-hub.md), [STMicroelectronics B-U585I-IOT02A Discovery kit](../iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md), [STMicroelectronics B-L4S5I-IOT01A Discovery kit](../iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md), or [NXP MIMXRT1060-EVK Evaluation kit](../iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md). These boards typically have built-in sensors, such as temperature and accelerometer sensors.
Microsoft provides open-source [Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) that you can use to build the apps that run on your devices.
iot Iot Mqtt Connect To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-hub.md
In the **CONNECT** packet, the device should use the following values:
You can also use the cross-platform [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token. You can then copy and paste the SAS token into your own code for testing purposes.
-For a tutorial on using MQTT directly, see [Use MQTT to develop an IoT device client without using a device SDK](../iot-develop/tutorial-use-mqtt.md).
+For a tutorial on using MQTT directly, see [Use MQTT to develop an IoT device client without using a device SDK](./tutorial-use-mqtt.md).
### Using the Azure IoT Hub extension for Visual Studio Code
The [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample)
The C/C++ samples use the [Eclipse Mosquitto](https://mosquitto.org) library, the Python sample uses [Eclipse Paho](https://www.eclipse.org/paho/), and the CLI samples use `mosquitto_pub`.
-To learn more, see [Tutorial - Use MQTT to develop an IoT device client](../iot-develop/tutorial-use-mqtt.md).
+To learn more, see [Tutorial - Use MQTT to develop an IoT device client](./tutorial-use-mqtt.md).
## TLS/SSL configuration
For more information, see [Understand and invoke direct methods from IoT Hub](..
To learn more about using MQTT, see: * [MQTT documentation](https://mqtt.org/)
-* [Use MQTT to develop an IoT device client without using a device SDK](../iot-develop/tutorial-use-mqtt.md)
+* [Use MQTT to develop an IoT device client without using a device SDK](./tutorial-use-mqtt.md)
* [MQTT application samples](https://github.com/Azure-Samples/MqttApplicationSamples) To learn more about using IoT device SDKS, see:
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
A device can establish a secure connection to an IoT hub:
The advantage of using DPS is that you don't need to configure all of your devices with connection-strings that are specific to your IoT hub. Instead, you configure your devices to connect to a well-known, common DPS endpoint where they discover their connection details. To learn more, see [Device Provisioning Service](../iot-dps/about-iot-dps.md).
-To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](../iot-develop/concepts-manage-device-reconnections.md).
+To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](./concepts-manage-device-reconnections.md).
## Device connection strings
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
The [IoT device development](../iot-develop/about-iot-develop.md) site includes
You can find more samples in the [code sample browser](/samples/browse/?expanded=azure&products=azure-iot%2Cazure-iot-edge%2Cazure-iot-pnp%2Cazure-rtos).
-To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](../iot-develop/concepts-manage-device-reconnections.md).
+To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](./concepts-manage-device-reconnections.md).
## Device development without a device SDK
iot Iot Overview Scalability High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-scalability-high-availability.md
You can scale the IoT Hub service vertically and horizontally. For an automated
For a guide to scalability in an IoT Central solution, see [What does it mean for IoT Central to have elastic scale](../iot-central/core/concepts-faq-scalability-availability.md#scalability). If you're using private endpoints with your IoT Central solution, you need to [plan the size of the subnet in your virtual network](../iot-central/core/concepts-private-endpoints.md#plan-the-size-of-the-subnet-in-your-virtual-network).
-For devices that connect to an IoT hub directly or to an IoT hub in an IoT Central application, make sure that the devices continue to connect as your solution scales. To learn more, see [Manage device reconnections after autoscale](../iot-develop/concepts-manage-device-reconnections.md) and [Handle connection failures](../iot-central/core/concepts-device-implementation.md#best-practices).
+For devices that connect to an IoT hub directly or to an IoT hub in an IoT Central application, make sure that the devices continue to connect as your solution scales. To learn more, see [Manage device reconnections after autoscale](./concepts-manage-device-reconnections.md) and [Handle connection failures](../iot-central/core/concepts-device-implementation.md#best-practices).
IoT Edge can help to help scale your solution. IoT Edge lets you move cloud analytics and custom business logic from the cloud to your devices. This approach lets your cloud solution focus on business insights instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, deploy those containers to your devices, and monitor them from the cloud. For more information, see [Azure IoT Edge](../iot-edge/about-iot-edge.md). Service tiers and pricing plans: - [Choose the right IoT Hub tier and size for your solution](../iot-hub/iot-hub-scaling.md)-- [Choose the right pricing plan for your IoT Central solution](../iot-central/core/howto-create-iot-central-application.md#pricing-plans)
+- [Choose the right pricing plan for your IoT Central solution](https://azure.microsoft.com/pricing/details/iot-central/)
Service limits and quotas:
The following tutorials and guides provide more detail and guidance:
- [Tutorial: Perform manual failover for an IoT hub](../iot-hub/tutorial-manual-failover.md) - [How to manually migrate an Azure IoT hub to a new Azure region](../iot-hub/migrate-hub-arm.md)-- [Manage device reconnections to create resilient applications (IoT Hub and IoT Central)](../iot-develop/concepts-manage-device-reconnections.md)
+- [Manage device reconnections to create resilient applications (IoT Hub and IoT Central)](./concepts-manage-device-reconnections.md)
- [IoT Central device best practices](../iot-central/core/concepts-device-implementation.md#best-practices) ## Next steps
iot Iot Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-security.md
Microsoft Defender for IoT can automatically monitor some of the recommendations
- [Export IoT Central data](../iot-central/core/howto-export-to-blob-storage.md) - [Export IoT Central data to a secure destination on an Azure Virtual Network](../iot-central/core/howto-connect-secure-vnet.md) -- **Monitor your IoT solution from the cloud**: Monitor the overall health of your IoT solution using the [IoT Hub metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md) or [Monitor IoT Central application health](../iot-central/core/howto-manage-iot-central-from-portal.md#monitor-application-health).
+- **Monitor your IoT solution from the cloud**: Monitor the overall health of your IoT solution using the [IoT Hub metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md) or [Monitor IoT Central application health](../iot-central/core/howto-manage-and-monitor-iot-central.md#monitor-application-health).
- **Set up diagnostics**: Monitor your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor. To learn more, see [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md).
iot Iot Overview Solution Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-management.md
While there are tools specifically for [monitoring devices](iot-overview-device-
| IoT Hub | [Use Azure Monitor to monitor your IoT hub](../iot-hub/monitor-iot-hub.md) </br> [Check IoT Hub service and resource health](../iot-hub/iot-hub-azure-service-health-integration.md) | | Device Provisioning Service (DPS) | [Use Azure Monitor to monitor your DPS instance](../iot-dps/monitor-iot-dps.md) | | IoT Edge | [Use Azure Monitor to monitor your IoT Edge fleet](../iot-edge/how-to-collect-and-transport-metrics.md) </br> [Monitor IoT Edge deployments](../iot-edge/how-to-monitor-iot-edge-deployments.md) |
-| IoT Central | [Use audit logs to track activity in your IoT Central application](../iot-central/core/howto-use-audit-logs.md) </br> [Use Azure Monitor to monitor your IoT Central application](../iot-central/core/howto-manage-iot-central-from-portal.md#monitor-application-health) |
+| IoT Central | [Use audit logs to track activity in your IoT Central application](../iot-central/core/howto-use-audit-logs.md) </br> [Use Azure Monitor to monitor your IoT Central application](../iot-central/core/howto-manage-and-monitor-iot-central.md#monitor-application-health) |
| Azure Digital Twins | [Use Azure Monitor to monitor Azure Digital Twins resources](../digital-twins/how-to-monitor.md) | To learn more about the Azure Monitor service, see [Azure Monitor overview](../azure-monitor/overview.md).
The Azure portal offers a consistent GUI environment for managing your Azure IoT
| Action | Links | |--|-|
-| Deploy service instances in your Azure subscription | [Manage your IoT hubs](../iot-hub/iot-hub-create-through-portal.md) </br>[Set up DPS](../iot-dps/quick-setup-auto-provision.md) </br> [Manage IoT Central applications](../iot-central/core/howto-manage-iot-central-from-portal.md) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-portal.md) |
+| Deploy service instances in your Azure subscription | [Manage your IoT hubs](../iot-hub/iot-hub-create-through-portal.md) </br>[Set up DPS](../iot-dps/quick-setup-auto-provision.md) </br> [Manage IoT Central applications](../iot-central/core/howto-manage-and-monitor-iot-central.md) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-portal.md) |
| Configure services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-portal.md) </br> [Deploy IoT Edge modules](../iot-edge/how-to-deploy-at-scale.md) </br> [Configure file uploads (IoT Hub)](../iot-hub/iot-hub-configure-file-upload.md) </br> [Manage device enrollments (DPS)](../iot-dps/how-to-manage-enrollments.md) </br> [Manage allocation policies (DPS)](../iot-dps/how-to-use-allocation-policies.md) | ## ARM templates and Bicep
Use PowerShell to automate the management of your IoT solution. For example, you
| Action | Links | |--|-|
-| Deploy service instances in your Azure subscription | [Create an IoT hub using the New-AzIotHub cmdlet](../iot-hub/iot-hub-create-using-powershell.md) </br> [Create an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-powershell#create-an-application) |
-| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-powershell.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-powershell#modify-an-application) |
+| Deploy service instances in your Azure subscription | [Create an IoT hub using the New-AzIotHub cmdlet](../iot-hub/iot-hub-create-using-powershell.md) </br> [Create an IoT Central application](../iot-central/core/howto-create-iot-central-application.md?tabs=azure-powershell) |
+| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-powershell.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-and-monitor-iot-central.md?tabs=azure-powershell) |
For PowerShell reference documentation, see:
Use the Azure CLI to automate the management of your IoT solution. For example,
| Action | Links | |--|-|
-| Deploy service instances in your Azure subscription | [Create an IoT hub using the Azure CLI](../iot-hub/iot-hub-create-using-cli.md) </br> [Create an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-cli#create-an-application) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-cli.md) </br> [Set up DPS](../iot-dps/quick-setup-auto-provision-cli.md) |
-| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-azure-cli.md) </br> [Deploy and monitor IoT Edge modules at scale](../iot-edge/how-to-deploy-cli-at-scale.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-iot-central-from-cli.md?tabs=azure-cli#modify-an-application) </br> [Create an Azure Digital Twins graph](../digital-twins/tutorial-command-line-cli.md) |
+| Deploy service instances in your Azure subscription | [Create an IoT hub using the Azure CLI](../iot-hub/iot-hub-create-using-cli.md) </br> [Create an IoT Central application](../iot-central/core/howto-create-iot-central-application.md) </br> [Set up an Azure Digital Twins instance](../digital-twins/how-to-set-up-instance-cli.md) </br> [Set up DPS](../iot-dps/quick-setup-auto-provision-cli.md) |
+| Manage services | [Create and delete routes and endpoints (IoT Hub)](../iot-hub/how-to-routing-azure-cli.md) </br> [Deploy and monitor IoT Edge modules at scale](../iot-edge/how-to-deploy-cli-at-scale.md) </br> [Manage an IoT Central application](../iot-central/core/howto-manage-and-monitor-iot-central.md) </br> [Create an Azure Digital Twins graph](../digital-twins/tutorial-command-line-cli.md) |
For Azure CLI reference documentation, see:
iot Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-sdks.md
To learn more about how to use the device SDKs, see [What is Azure IoT device an
Use the embedded device SDKs to develop code to run on IoT devices that connect to IoT Hub or IoT Central.
-To learn more about when to use the embedded device SDKs, see [C SDK and Embedded C SDK usage scenarios](../iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md).
+To learn more about when to use the embedded device SDKs, see [C SDK and Embedded C SDK usage scenarios](./concepts-using-c-sdk-and-embedded-c-sdk.md).
### Device SDK lifecycle and support
iot Tutorial Devkit Espressif Esp32 Freertos Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-devkit-espressif-esp32-freertos-iot-hub.md
+
+ Title: Connect an ESPRESSIF ESP-32 to Azure IoT Hub quickstart
+description: Use Azure IoT middleware for FreeRTOS to connect an ESPRESSIF ESP32-Azure IoT Kit device to Azure IoT Hub and send telemetry.
+++
+ms.devlang: c
+ Last updated : 04/04/2024
+#Customer intent: As a device builder, I want to see a working IoT device sample using FreeRTOS to connect to Azure IoT Hub. The device should be able to send telemetry and respond to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
++
+# Quickstart: Connect an ESPRESSIF ESP32-Azure IoT Kit to IoT Hub
+
+In this quickstart, you use the Azure IoT middleware for FreeRTOS to connect the ESPRESSIF ESP32-Azure IoT Kit (from now on, the ESP32 DevKit) to Azure IoT.
+
+You complete the following tasks:
+
+* Install a set of embedded development tools for programming an ESP32 DevKit
+* Build an image and flash it onto the ESP32 DevKit
+* Use Azure CLI to create and manage an Azure IoT hub that the ESP32 DevKit connects to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+ * ESPRESSIF [ESP32-Azure IoT Kit](https://www.espressif.com/products/devkits/esp32-azure-kit/overview)
+ * USB 2.0 A male to Micro USB male cable
+ * Wi-Fi 2.4 GHz
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare the development environment
+
+### Install the tools
+To set up your development environment, first you install the ESPRESSIF ESP-IDF build environment. The installer includes all the tools required to clone, build, flash, and monitor your device.
+
+To install the ESP-IDF tools:
+1. Download and launch the [ESP-IDF v5.0 Offline-installer](https://dl.espressif.com/dl/esp-idf).
+1. When the installer lists components to install, select all components and complete the installation.
++
+### Clone the repo
+
+Clone the following repo to download all sample device code, setup scripts, and SDK documentation. If you previously cloned this repo, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/Azure-Samples/iot-middleware-freertos-samples.git
+```
+
+For Windows 10 and 11, make sure long paths are enabled.
+
+1. To enable long paths, see [Enable long paths in Windows 10](/windows/win32/fileio/maximum-file-path-limitation?tabs=registry).
+1. In git, run the following command in a terminal with administrator permissions:
+
+ ```shell
+ git config --system core.longpaths true
+ ```
++
+## Prepare the device
+To connect the ESP32 DevKit to Azure, you modify configuration settings, build the image, and flash the image to the device.
+
+### Set up the environment
+To launch the ESP-IDF environment:
+1. Select Windows **Start**, find **ESP-IDF 5.0 CMD** and run it.
+1. In **ESP-IDF 5.0 CMD**, navigate to the *iot-middleware-freertos-samples* directory that you cloned previously.
+1. Navigate to the ESP32-Azure IoT Kit project directory *demos\projects\ESPRESSIF\aziotkit*.
+1. Run the following command to launch the configuration menu:
+
+ ```shell
+ idf.py menuconfig
+ ```
+
+### Add configuration
+
+To add wireless network configuration:
+1. In **ESP-IDF 5.0 CMD**, select **Azure IoT middleware for FreeRTOS Sample Configuration >**, and press <kbd>Enter</kbd>.
+1. Set the following configuration settings using your local wireless network credentials.
+
+ |Setting|Value|
+ |-|--|
+ |**WiFi SSID** |{*Your Wi-Fi SSID*}|
+ |**WiFi Password** |{*Your Wi-Fi password*}|
+
+1. Select <kbd>Esc</kbd> to return to the previous menu.
+
+To add configuration to connect to Azure IoT Hub:
+1. Select **Azure IoT middleware for FreeRTOS Main Task Configuration >**, and press <kbd>Enter</kbd>.
+1. Set the following Azure IoT configuration settings to the values that you saved after you created Azure resources.
+
+ |Setting|Value|
+ |-|--|
+ |**Azure IoT Hub FQDN** |{*Your host name*}|
+ |**Azure IoT Device ID** |{*Your Device ID*}|
+ |**Azure IoT Device Symmetric Key** |{*Your primary key*}|
+
+ > [!NOTE]
+ > In the setting **Azure IoT Authentication Method**, confirm that the default value of *Symmetric Key* is selected.
+
+1. Select <kbd>Esc</kbd> to return to the previous menu.
++
+To save the configuration:
+1. Select <kbd>Shift</kbd>+<kbd>S</kbd> to open the save options. This menu lets you save the configuration to a file named *sdkconfig* in the current *.\aziotkit* directory.
+1. Select <kbd>Enter</kbd> to save the configuration.
+1. Select <kbd>Enter</kbd> to dismiss the acknowledgment message.
+1. Select <kbd>Q</kbd> to quit the configuration menu.
++
+### Build and flash the image
+In this section, you use the ESP-IDF tools to build, flash, and monitor the ESP32 DevKit as it connects to Azure IoT.
+
+> [!NOTE]
+> In the following commands in this section, use a short build output path near your root directory. Specify the build path after the `-B` parameter in each command that requires it. The short path helps to avoid a current issue in the ESPRESSIF ESP-IDF tools that can cause errors with long build path names. The following commands use a local path *C:\espbuild* as an example.
+
+To build the image:
+1. In **ESP-IDF 5.0 CMD**, from the *iot-middleware-freertos-samples\demos\projects\ESPRESSIF\aziotkit* directory, run the following command to build the image.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" build
+ ```
+
+1. After the build completes, confirm that the binary image file was created in the build path that you specified previously.
+
+ *C:\espbuild\azure_iot_freertos_esp32.bin*
+
+To flash the image:
+1. On the ESP32 DevKit, locate the Micro USB port, which is highlighted in the following image:
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/esp-azure-iot-kit.png" alt-text="Photo of the ESP32-Azure IoT Kit board.":::
+
+1. Connect the Micro USB cable to the Micro USB port on the ESP32 DevKit, and then connect it to your computer.
+1. Open Windows **Device Manager**, and view **Ports** to find out which COM port the ESP32 DevKit is connected to.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/esp-device-manager.png" alt-text="Screenshot of Windows Device Manager displaying COM port for a connected device.":::
+
+1. In **ESP-IDF 5.0 CMD**, run the following command, replacing the *\<Your-COM-port\>* placeholder and brackets with the correct COM port from the previous step. For example, replace the placeholder with `COM3`.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" -p <Your-COM-port> flash
+ ```
+
+1. Confirm that the output completes with the following text for a successful flash:
+
+ ```output
+ Hash of data verified
+
+ Leaving...
+ Hard resetting via RTS pin...
+ Done
+ ```
+
+To confirm that the device connects to Azure IoT Hub:
+1. In **ESP-IDF 5.0 CMD**, run the following command to start the monitoring tool. As you did in the previous command, replace the \<Your-COM-port\> placeholder and brackets with the COM port that the device is connected to.
+
+ ```shell
+ idf.py -B "C:\espbuild" -p <Your-COM-port> monitor
+ ```
+
+1. Check for repeating blocks of output similar to the following example. This output confirms that the device connects to Azure IoT and sends telemetry.
+
+ ```output
+ I (50807) AZ IOT: Successfully sent telemetry message
+ I (50807) AZ IOT: Attempt to receive publish message from IoT Hub.
+
+ I (51057) MQTT: Packet received. ReceivedBytes=2.
+ I (51057) MQTT: Ack packet deserialized with result: MQTTSuccess.
+ I (51057) MQTT: State record updated. New state=MQTTPublishDone.
+ I (51067) AZ IOT: Puback received for packet id: 0x00000008
+ I (53067) AZ IOT: Keeping Connection Idle...
+ ```
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the ESP32 DevKit. These capabilities rely on the device model published for the ESP32 DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of the device's default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |||||
+ | **Interface** | Interface | `Espressif ESP32 Azure IoT Kit` | Example device model for the ESP32 DevKit |
+ | **Properties (writable)** | Property | `telemetryFrequencySecs` | The interval that the device sends telemetry |
+ | **Commands** | Command | `ToggleLed1` | Turn the LED on or off |
+ | **Commands** | Command | `ToggleLed2` | Turn the LED on or off |
+ | **Commands** | Command | `DisplayText` | Displays sent text on the device screen |
+
+To view and edit device properties using Azure IoT Explorer:
+
+1. Select the **Properties (writable)** tab. It displays the interval at which telemetry is sent.
+1. Change the `telemetryFrequencySecs` value to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on the device in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification.
+
+To use Azure CLI to view device properties:
+
+1. In your CLI console, run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
+
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
+
+> [!TIP]
+> You can also use Azure IoT Explorer to view device properties. In the left navigation select **Device twin**.
+
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+ "interface": "dtmi:azureiot:devkit:freertos:Esp32AzureIotKit;1",
+ "component": "",
+ "payload": "{\"temperature\":28.6,\"humidity\":25.1,\"light\":116.66,\"pressure\":-33.69,\"altitude\":8764.9,\"magnetometerX\":1627,\"magnetometerY\":28373,\"magnetometerZ\":4232,\"pitch\":6,\"roll\":0,\"accelerometerX\":-1,\"accelerometerY\":0,\"accelerometerZ\":9}"
+ }
+ }
+ ```
+
+1. Select CTRL+C to end monitoring.
++
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **ToggleLed1** command, select **Send command**. The LED on the ESP32 DevKit toggles on or off. You should also see a notification in IoT Explorer.
+
+ :::image type="content" source="media/tutorial-devkit-espressif-esp32-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling a method in IoT Explorer.":::
+
+1. For the **DisplayText** command, enter some text in the **content** field.
+1. Select **Send command**. The text displays on the ESP32 DevKit screen.
++
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` means the LED toggles to the opposite of its current state.
++
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name ToggleLed2 --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. Check your device to confirm the LED state.
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](../iot-develop/troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
++
+## Next steps
+
+In this quickstart, you built a custom image that contains the Azure IoT middleware for FreeRTOS sample code, and then you flashed the image to the ESP32 DevKit device. You connected the ESP32 DevKit to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling methods on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated general device to IoT Hub](./tutorial-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](./concepts-using-c-sdk-and-embedded-c-sdk.md)
iot Tutorial Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-send-telemetry-iot-hub.md
+
+ Title: Send device telemetry to Azure IoT Hub quickstart
+description: "This quickstart shows device developers how to connect a device securely to Azure IoT Hub. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to build a device client for Windows, Linux, or Raspberry Pi (Raspbian). Then you connect and send telemetry."
++++ Last updated : 04/04/2024+
+zone_pivot_groups: iot-develop-set1
+
+ms.devlang: azurecli
+#Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Hub, and send telemetry.
++
+# Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub
+++++++++++++++
+
+## Clean up resources
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
+
+## Next steps
+
+In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used Azure CLI to create an Azure IoT hub and a device instance. Then you used an Azure IoT device SDK to create a temperature controller, connect it to the hub, and send telemetry. You also used Azure CLI to monitor telemetry.
+
+As a next step, explore the following articles to learn more about building device solutions with Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Control a device connected to an IoT hub](../iot-hub/quickstart-control-device.md)
+> [!div class="nextstepaction"]
+> [Build a device solution with IoT Hub](set-up-environment.md)
iot Tutorial Use Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-use-mqtt.md
+
+ Title: "Tutorial: Use MQTT to create an IoT device client"
+description: Tutorial - Use the MQTT protocol directly to create an IoT device client without using the Azure IoT Device SDKs
+++ Last updated : 04/04/2024+++
+#Customer intent: As a device builder, I want to see how I can use the MQTT protocol to create an IoT device client without using the Azure IoT Device SDKs.
++
+# Tutorial - Use MQTT to develop an IoT device client without using a device SDK
+
+You should use one of the Azure IoT Device SDKs to build your IoT device clients if at all possible. However, in scenarios such as using a memory-constrained device, you may need to use an MQTT library to communicate with your IoT hub.
+
+The samples in this tutorial use the [Eclipse Mosquitto](http://mosquitto.org/) MQTT library.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Build the C language device client sample applications.
+> * Run a sample that uses the MQTT library to send telemetry.
+> * Run a sample that uses the MQTT library to process a cloud-to-device message sent from your IoT hub.
+> * Run a sample that uses the MQTT library to manage the device twin on the device.
+
+You can use either a Windows or Linux development machine to complete the steps in this tutorial.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
++
+### Development machine prerequisites
+
+If you're using Windows:
+
+1. Install [Visual Studio (Community, Professional, or Enterprise)](https://visualstudio.microsoft.com/downloads). Be sure to enable the **Desktop development with C++** workload.
+
+1. Install [CMake](https://cmake.org/download/). Enable the **Add CMake to the system PATH for all users** option.
+
+1. Install the **x64 version** of [Mosquitto](https://mosquitto.org/download/).
+
+If you're using Linux:
+
+1. Run the following command to install the build tools:
+
+ ```bash
+ sudo apt install cmake g++
+ ```
+
+1. Run the following command to install the Mosquitto client library:
+
+ ```bash
+ sudo apt install libmosquitto-dev
+ ```
+
+## Set up your environment
+
+If you don't already have an IoT hub, run the following commands to create a free-tier IoT hub in a resource group called `mqtt-sample-rg`. The command uses the name `my-hub` as an example for the name of the IoT hub to create. Choose a unique name for your IoT hub to use in place of `my-hub`:
+
+```azurecli-interactive
+az group create --name mqtt-sample-rg --location eastus
+az iot hub create --name my-hub --resource-group mqtt-sample-rg --sku F1
+```
+
+Make a note of the name of your IoT hub; you need it later.
+
+Register a device in your IoT hub. The following command registers a device called `mqtt-dev-01` in an IoT hub called `my-hub`. Be sure to use the name of your IoT hub:
+
+```azurecli-interactive
+az iot hub device-identity create --hub-name my-hub --device-id mqtt-dev-01
+```
+
+Use the following command to create a SAS token that grants the device access to your IoT hub. Be sure to use the name of your IoT hub:
+
+```azurecli-interactive
+az iot hub generate-sas-token --device-id mqtt-dev-01 --hub-name my-hub --du 7200
+```
+
+Make a note of the SAS token the command outputs as you need it later. The SAS token looks like `SharedAccessSignature sr=my-hub.azure-devices.net%2Fdevices%2Fmqtt-dev-01&sig=%2FnM...sNwtnnY%3D&se=1677855761`
+
+> [!TIP]
+> By default, the SAS token is valid for 60 minutes. The `--du 7200` option in the previous command extends the token duration to two hours. If it expires before you're ready to use it, generate a new one. You can also create a token with a longer duration. To learn more, see [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token).
+
+## Clone the sample repository
+
+Use the following command to clone the sample repository to a suitable location on your local machine:
+
+```cmd
+git clone https://github.com/Azure-Samples/IoTMQTTSample.git
+```
+
+The repository also includes:
+
+* A Python sample that uses the `paho-mqtt` library.
+* Instructions for using the `mosquitto_pub` CLI to interact with your IoT hub.
+
+## Build the C samples
+
+Before you build the sample, you need to add the IoT hub and device details. In the cloned IoTMQTTSample repository, open the _mosquitto/src/config.h_ file. Add your IoT hub name, device ID, and SAS token as follows. Be sure to use the name of your IoT hub:
+
+```c
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT License.
+
+#define IOTHUBNAME "my-hub"
+#define DEVICEID "mqtt-dev-01"
+#define SAS_TOKEN "SharedAccessSignature sr=my-hub.azure-devices.net%2Fdevices%2Fmqtt-dev-01&sig=%2FnM...sNwtnnY%3D&se=1677855761"
+
+#define CERTIFICATEFILE CERT_PATH "IoTHubRootCA.crt.pem"
+```
+
+> [!NOTE]
+> The *IoTHubRootCA.crt.pem* file includes the CA root certificates for the TLS connection.
+
+Save the changes to the _mosquitto/src/config.h_ file.
+
+To build the samples, run the following commands in your shell:
+
+```bash
+cd mosquitto
+cmake -Bbuild
+cmake --build build
+```
+
+In Linux, the binaries are in the _./build_ folder underneath the _mosquitto_ folder.
+
+In Windows, the binaries are in the _.\build\Debug_ folder underneath the _mosquitto_ folder.
+
+## Send telemetry
+
+The *mosquitto_telemetry* sample shows how to send a device-to-cloud telemetry message to your IoT hub by using the MQTT library.
+
+Before you run the sample application, run the following command to start the event monitor for your IoT hub. Be sure to use the name of your IoT hub:
+
+```azurecli-interactive
+az iot hub monitor-events --hub-name my-hub
+```
+
+Run the _mosquitto_telemetry_ sample. For example, on Linux:
+
+```bash
+./build/mosquitto_telemetry
+```
+
+The `az iot hub monitor-events` command generates the following output, which shows the payload sent by the device:
+
+```text
+Starting event monitor, use ctrl-c to stop...
+{
+ "event": {
+ "origin": "mqtt-dev-01",
+ "module": "",
+ "interface": "",
+ "component": "",
+ "payload": "Bonjour MQTT from Mosquitto"
+ }
+}
+```
+
+You can now stop the event monitor.
+
+### Review the code
+
+The following snippets are taken from the _mosquitto/src/mosquitto_telemetry.cpp_ file.
+
+The following statements define the connection information and the name of the MQTT topic you use to send the telemetry message:
+
+```c
+#define HOST IOTHUBNAME ".azure-devices.net"
+#define PORT 8883
+#define USERNAME HOST "/" DEVICEID "/?api-version=2020-09-30"
+
+#define TOPIC "devices/" DEVICEID "/messages/events/"
+```
+
+The `main` function sets the user name and password to authenticate with your IoT hub. The password is the SAS token you created for your device:
+
+```c
+mosquitto_username_pw_set(mosq, USERNAME, SAS_TOKEN);
+```
+
+The sample uses the MQTT topic to send a telemetry message to your IoT hub:
+
+```c
+int msgId = 42;
+char msg[] = "Bonjour MQTT from Mosquitto";
+
+// once connected, we can publish a Telemetry message
+printf("Publishing....\r\n");
+rc = mosquitto_publish(mosq, &msgId, TOPIC, sizeof(msg) - 1, msg, 1, true);
+if (rc != MOSQ_ERR_SUCCESS)
+{
+ return mosquitto_error(rc);
+}
+printf("Publish returned OK\r\n");
+```
+
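+You can also attach application properties to a telemetry message by appending a URL-encoded property bag to the same topic. The property names and values in the following sketch are illustrative only and aren't part of the sample:
+
+```c
+// Application properties travel in the topic as a URL-encoded property bag:
+// devices/{device-id}/messages/events/{key1}={value1}&{key2}={value2}
+#define TOPIC_WITH_PROPERTIES "devices/" DEVICEID "/messages/events/" \
+                              "deviceType=sensor&priority=high"
+
+// Publish to TOPIC_WITH_PROPERTIES instead of TOPIC to attach the properties:
+// rc = mosquitto_publish(mosq, &msgId, TOPIC_WITH_PROPERTIES, sizeof(msg) - 1, msg, 1, true);
+```
+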
+To learn more, see [Sending device-to-cloud messages](./iot-mqtt-connect-to-iot-hub.md#sending-device-to-cloud-messages).
+
+## Receive a cloud-to-device message
+
+The *mosquitto_subscribe* sample shows how to subscribe to MQTT topics and receive a cloud-to-device message from your IoT hub by using the MQTT library.
+
+Run the _mosquitto_subscribe_ sample. For example, on Linux:
+
+```bash
+./build/mosquitto_subscribe
+```
+
+Run the following command to send a cloud-to-device message from your IoT hub. Be sure to use the name of your IoT hub:
+
+```azurecli-interactive
+az iot device c2d-message send --hub-name my-hub --device-id mqtt-dev-01 --data "hello world"
+```
+
+The output from _mosquitto_subscribe_ looks like the following example:
+
+```text
+Waiting for C2D messages...
+C2D message 'hello world' for topic 'devices/mqtt-dev-01/messages/devicebound/%24.mid=d411e727-...f98f&%24.to=%2Fdevices%2Fmqtt-dev-01%2Fmessages%2Fdevicebound&%24.ce=utf-8&iothub-ack=none'
+Got message for devices/mqtt-dev-01/messages/# topic
+```
+
+### Review the code
+
+The following snippets are taken from the _mosquitto/src/mosquitto_subscribe.cpp_ file.
+
+The following statement defines the topic filter the device uses to receive cloud-to-device messages. The `#` is a multi-level wildcard:
+
+```c
+#define DEVICEMESSAGE "devices/" DEVICEID "/messages/#"
+```
+
+The `main` function uses the `mosquitto_message_callback_set` function to set a callback to handle messages sent from your IoT hub and uses the `mosquitto_subscribe` function to subscribe to all messages. The following snippet shows the callback function:
+
+```c
+void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_message* message)
+{
+ printf("C2D message '%.*s' for topic '%s'\r\n", message->payloadlen, (char*)message->payload, message->topic);
+
+ bool match = 0;
+ mosquitto_topic_matches_sub(DEVICEMESSAGE, message->topic, &match);
+
+ if (match)
+ {
+ printf("Got message for " DEVICEMESSAGE " topic\r\n");
+ }
+}
+```
+
+To learn more, see [Use MQTT to receive cloud-to-device messages](./iot-mqtt-connect-to-iot-hub.md#receiving-cloud-to-device-messages).
+
+## Update a device twin
+
+The *mosquitto_device_twin* sample shows how to set a reported property in a device twin and then read the property back.
+
+Run the _mosquitto_device_twin_ sample. For example, on Linux:
+
+```bash
+./build/mosquitto_device_twin
+```
+
+The output from _mosquitto_device_twin_ looks like the following example:
+
+```text
+Setting device twin reported properties....
+Device twin message '' for topic '$iothub/twin/res/204/?$rid=0&$version=2'
+Setting device twin properties SUCCEEDED.
+
+Getting device twin properties....
+Device twin message '{"desired":{"$version":1},"reported":{"temperature":32,"$version":2}}' for topic '$iothub/twin/res/200/?$rid=1'
+Getting device twin properties SUCCEEDED.
+```
+
+### Review the code
+
+The following snippets are taken from the _mosquitto/src/mosquitto_device_twin.cpp_ file.
+
+The following statements define the topics the device uses to subscribe to device twin updates, read the device twin, and update the device twin:
+
+```c
+#define DEVICETWIN_SUBSCRIPTION "$iothub/twin/res/#"
+#define DEVICETWIN_MESSAGE_GET "$iothub/twin/GET/?$rid=%d"
+#define DEVICETWIN_MESSAGE_PATCH "$iothub/twin/PATCH/properties/reported/?$rid=%d"
+```
+
+The `main` function uses the `mosquitto_connect_callback_set` function to set a callback to handle messages sent from your IoT hub and uses the `mosquitto_subscribe` function to subscribe to the `$iothub/twin/res/#` topic.
+
+The following snippet shows the `connect_callback` function that uses `mosquitto_publish` to set a reported property in the device twin. The device publishes the message to the `$iothub/twin/PATCH/properties/reported/?$rid=%d` topic. The `%d` value is incremented each time the device publishes a message to the topic:
+
+```c
+void connect_callback(struct mosquitto* mosq, void* obj, int result)
+{
+ // ... other code ...
+
+ printf("\r\nSetting device twin reported properties....\r\n");
+
+ char msg[] = "{\"temperature\": 32}";
+ char mqtt_publish_topic[64];
+ snprintf(mqtt_publish_topic, sizeof(mqtt_publish_topic), DEVICETWIN_MESSAGE_PATCH, device_twin_request_id++);
+
+ int rc = mosquitto_publish(mosq, NULL, mqtt_publish_topic, sizeof(msg) - 1, msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+
+ // ... other code ...
+}
+```
+
+The device subscribes to the `$iothub/twin/res/#` topic and when it receives a message from your IoT hub, the `message_callback` function handles it. When you run the sample, the `message_callback` function gets called twice. The first time, the device receives a response from the IoT hub to the reported property update. The device then requests the device twin. The second time, the device receives the requested device twin. The following snippet shows the `message_callback` function:
+
+```c
+void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_message* message)
+{
+ printf("Device twin message '%.*s' for topic '%s'\r\n", message->payloadlen, (char*)message->payload, message->topic);
+
+ const char patchTwinTopic[] = "$iothub/twin/res/204/?$rid=0";
+ const char getTwinTopic[] = "$iothub/twin/res/200/?$rid=1";
+
+ if (strncmp(message->topic, patchTwinTopic, sizeof(patchTwinTopic) - 1) == 0)
+ {
+ // Process the reported property response and request the device twin
+ printf("Setting device twin properties SUCCEEDED.\r\n\r\n");
+
+ printf("Getting device twin properties....\r\n");
+
+ char msg[] = "{}";
+ char mqtt_publish_topic[64];
+ snprintf(mqtt_publish_topic, sizeof(mqtt_publish_topic), DEVICETWIN_MESSAGE_GET, device_twin_request_id++);
+
+ int rc = mosquitto_publish(mosq, NULL, mqtt_publish_topic, sizeof(msg) - 1, msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+ {
+ printf("Error: %s\r\n", mosquitto_strerror(rc));
+ }
+ }
+ else if (strncmp(message->topic, getTwinTopic, sizeof(getTwinTopic) - 1) == 0)
+ {
+ // Process the device twin response and stop the client
+ printf("Getting device twin properties SUCCEEDED.\r\n\r\n");
+
+ mosquitto_loop_stop(mosq, false);
+ mosquitto_disconnect(mosq); // finished, exit program
+ }
+}
+```
+
+To learn more, see [Use MQTT to update a device twin reported property](./iot-mqtt-connect-to-iot-hub.md#update-device-twins-reported-properties) and [Use MQTT to retrieve a device twin property](./iot-mqtt-connect-to-iot-hub.md#retrieving-a-device-twins-properties).
+
+## Clean up resources
++
+## Next steps
+
+Now that you've learned how to use the Mosquitto MQTT library to communicate with IoT Hub, a suggested next step is to review:
+
+> [!div class="nextstepaction"]
+> [Communicate with your IoT hub using the MQTT protocol](./iot-mqtt-connect-to-iot-hub.md)
+> [!div class="nextstepaction"]
+> [MQTT Application samples](https://github.com/Azure-Samples/MqttApplicationSamples)
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
Open the *pom.xml* file in your text editor. Add the following dependency elemen
#### Grant access to your key vault
-Create an access policy for your key vault that grants certificate permissions to your user account.
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --certificate-permissions delete get list create purge
-```
#### Set environment variables
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md
This quickstart is using Azure Identity library with Azure CLI to authenticate u
2. Sign in with your account credentials in the browser.
-#### Grant access to your key vault
+### Grant access to your key vault
-Create an access policy for your key vault that grants certificate permissions to your user account
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --certificate-permissions delete get list create purge
-```
### Create new .NET console app
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-node.md
Create a Node.js application that uses your key vault.
npm init -y ``` - ## Install Key Vault packages - 1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-certificates](https://www.npmjs.com/package/@azure/keyvault-certificates) for Node.js. ```terminal
Create a Node.js application that uses your key vault.
## Grant access to your key vault
-Create a vault access policy for your key vault that grants key permissions to your user account.
-
-```azurecli
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --certificate-permissions delete get list create purge update
-```
## Set environment variables
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
# Quickstart: Set and retrieve a certificate from Azure Key Vault using Azure PowerShell
-In this quickstart, you create a key vault in Azure Key Vault with Azure PowerShell. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault you may review the [Overview](../general/overview.md). Azure PowerShell is used to create and manage Azure resources using commands or scripts. Once that you have completed that, you will store a certificate.
+In this quickstart, you create a key vault in Azure Key Vault with Azure PowerShell. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, review the [Overview](../general/overview.md). Azure PowerShell is used to create and manage Azure resources using commands or scripts. Afterwards, you store a certificate.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Connect-AzAccount
[!INCLUDE [Create a key vault](../../../includes/key-vault-powershell-kv-creation.md)]
+### Grant access to your key vault
++ ## Add a certificate to Key Vault
-To add a certificate to the vault, you just need to take a couple of additional steps. This certificate could be used by an application.
+You can now add a certificate to the vault. This certificate could be used by an application.
-Type the commands below to create a self-signed certificate with policy called **ExampleCertificate** :
+Use these commands to create a self-signed certificate called **ExampleCertificate** with the specified policy:
```azurepowershell-interactive $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
To view previously stored certificate:
Get-AzKeyVaultCertificate -VaultName "<your-unique-keyvault-name>" -Name "ExampleCertificate" ```
-Now, you have created a Key Vault, stored a certificate, and retrieved it.
- **Troubleshooting**: Operation returned an invalid status code 'Forbidden'
Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> -ObjectId <AzureObjectID> -
## Next steps
-In this quickstart you created a Key Vault and stored a certificate in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a Key Vault and stored a certificate in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/)
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-python.md
This quickstart uses the Azure Identity library with Azure CLI or Azure PowerShe
### Grant access to your key vault
-Create an access policy for your key vault that grants certificate permission to your user account
-
-### [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --certificate-permissions delete get list create
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToCertificates delete,get,list,create
-```
-- ## Create the sample code
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
Previously updated : 01/30/2024 Last updated : 04/04/2024 -+ + # Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control > [!NOTE]
To add role assignments, you must have `Microsoft.Authorization/roleAssignments/
> [!NOTE] > Changing permission model requires 'Microsoft.Authorization/roleAssignments/write' permission, which is part of [Owner](../../role-based-access-control/built-in-roles.md#owner) and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) roles. Classic subscription administrator roles like 'Service Administrator' and 'Co-Administrator' are not supported.
-1. Enable Azure RBAC permissions on new key vault:
+1. Enable Azure RBAC permissions on new key vault:
![Enable Azure RBAC permissions - new vault](../media/rbac/new-vault.png)
-2. Enable Azure RBAC permissions on existing key vault:
+1. Enable Azure RBAC permissions on existing key vault:
![Enable Azure RBAC permissions - existing vault](../media/rbac/existing-vault.png)
To add role assignments, you must have `Microsoft.Authorization/roleAssignments/
> [!Note] > It's recommended to use the unique role ID instead of the role name in scripts. Therefore, if a role is renamed, your scripts would continue to work. In this document role name is used only for readability.
-Run the following command to create a role assignment:
- # [Azure CLI](#tab/azure-cli)+
+To create a role assignment using the Azure CLI, use the [az role assignment](/cli/azure/role/assignment) command:
+ ```azurecli az role assignment create --role <role_name_or_id> --assignee <assignee> --scope <scope> ```
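+For example, you can look up a role's unique ID with the Azure CLI before using it in a script. The following is a minimal sketch; the **Key Vault Secrets Officer** role is used only as an illustration:
+
+```azurecli
+# Look up the GUID of a built-in role definition by its display name
+az role definition list --name "Key Vault Secrets Officer" --query "[0].name" --output tsv
+
+# Create the assignment using the GUID instead of the display name
+az role assignment create --role "<role-guid>" --assignee "<assignee>" --scope "<scope>"
+```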
For full details, see [Assign Azure roles using Azure CLI](../../role-based-acce
# [Azure PowerShell](#tab/azurepowershell)
+To create a role assignment using Azure PowerShell, use the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) cmdlet:
+ ```azurepowershell #Assign by User Principal Name New-AzRoleAssignment -RoleDefinitionName <role_name> -SignInName <assignee_upn> -Scope <scope>
New-AzRoleAssignment -RoleDefinitionName Reader -ApplicationId <applicationId> -
For full details, see [Assign Azure roles using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md). -
+# [Azure portal](#tab/azure-portal)
To assign roles using the Azure portal, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). In the Azure portal, the Azure role assignments screen is available for all resources on the Access control (IAM) tab. ++ ### Resource group scope role assignment
+# [Azure portal](#tab/azure-portal)
+ 1. Go to the Resource Group that contains your key vault. ![Role assignment - resource group](../media/rbac/image-4.png)
To assign roles using the Azure portal, see [Assign Azure roles using the Azure
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) - # [Azure CLI](#tab/azure-cli) ```azurecli az role assignment create --role "Key Vault Reader" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}
The above role assignment provides the ability to list key vault objects in the key vault.
### Key Vault scope role assignment
+# [Azure portal](#tab/azure-portal)
+ 1. Go to Key Vault \> Access control (IAM) tab 1. Select **Add** > **Add role assignment** to open the Add role assignment page.
The above role assignment provides the ability to list key vault objects in the key vault.
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) - # [Azure CLI](#tab/azure-cli) ```azurecli az role assignment create --role "Key Vault Secrets Officer" --assignee {i.e jalichwa@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
> [!NOTE] > Key vault secret, certificate, key scope role assignments should only be used for limited scenarios described [here](rbac-guide.md?i#best-practices-for-individual-keys-secrets-and-certificates-role-assignments) to comply with security best practices.
+# [Azure portal](#tab/azure-portal)
+ 1. Open a previously created secret. 1. Click the Access control(IAM) tab
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) - # [Azure CLI](#tab/azure-cli)+ ```azurecli az role assignment create --role "Key Vault Secrets Officer" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}/secrets/RBACSecret ```
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
![Secret tab - error](../media/rbac/image-13.png)
-### Creating custom roles
+### Creating custom roles
[az role definition create command](/cli/azure/role/definition#az-role-definition-create) # [Azure CLI](#tab/azure-cli)+ ```azurecli az role definition create --role-definition '{ \ "Name": "Backup Keys Operator", \
az role definition create --role-definition '{ \
"AssignableScopes": ["/subscriptions/{subscriptionId}"] \ }' ```+ # [Azure PowerShell](#tab/azurepowershell) ```azurepowershell
$roleDefinition | Out-File role.json
New-AzRoleDefinition -InputFile role.json ```+
+# [Azure portal](#tab/azure-portal)
+
+See [Create or update Azure custom roles using the Azure portal](../../role-based-access-control/custom-roles-portal.md).
+ For more Information about how to create custom roles, see: [Azure custom roles](../../role-based-access-control/custom-roles.md)
-## Frequently Asked Questions:
+## Frequently Asked Questions
### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault? No. RBAC permission model allows you to assign access to individual objects in Key Vault to user or application, but any administrative operations like network access control, monitoring, and objects management require vault level permissions, which will then expose secure information to operators across application teams.
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
Open the *pom.xml* file in your text editor. Add the following dependency elemen
#### Grant access to your key vault
-Create an access policy for your key vault that grants key permissions to your user account.
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --key-permissions delete get list create purge
-```
#### Set environment variables
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-net.md
This quickstart is using Azure Identity library with Azure CLI to authenticate u
#### Grant access to your key vault
-Create an access policy for your key vault that grants key permissions to your user account
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --key-permissions delete get list create purge
-```
### Create new .NET console app
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-node.md
Create a Node.js application that uses your key vault.
## Grant access to your key vault
-Create an access policy for your key vault that grants key permissions to your user account
-
-```azurecli
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --key-permissions delete get list create update purge
-```
## Set environment variables
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-python.md
This quickstart is using the Azure Identity library with Azure CLI or Azure Powe
### Grant access to your key vault
-Create an access policy for your key vault that grants key permission to your user account.
-
-### [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --key-permissions get list create delete
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToKeys get,list,create,delete
-```
-- ## Create the sample code
Make sure the code in the previous section is in a file named *kv_keys.py*. Then
python kv_keys.py ``` -- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` or `Set-AzKeyVaultAccessPolicy` command](#grant-access-to-your-key-vault).-- Rerunning the code with the same key name may produce the error, "(Conflict) Key \<name\> is currently in a deleted but recoverable state." Use a different key name.
+Rerunning the code with the same key name may produce the error, "(Conflict) Key \<name\> is currently in a deleted but recoverable state." Use a different key name.
## Code details
Remove-AzResourceGroup -Name myResourceGroup
- [Overview of Azure Key Vault](../general/overview.md) - [Secure access to a key vault](../general/security-features.md)
+- [RBAC Guide](../general/rbac-guide.md)
- [Azure Key Vault developer's guide](../general/developers-guide.md)-- [Key Vault security overview](../general/security-features.md) - [Authenticate with Key Vault](../general/authentication.md)
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md
Individual blobs are stored as text, formatted as a JSON. Let's look at an examp
] ``` --
-## Use Azure Monitor logs
-
-You can use the Key Vault solution in Azure Monitor logs to review Managed HSM **AuditEvent** logs. In Azure Monitor logs, you use log queries to analyze data and get the information you need.
-
-For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../key-vault-insights-overview.md).
- ## Next steps - Learn about [best practices](best-practices.md) to provision and use a managed HSM
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure
[!INCLUDE [Create a key vault](../../../includes/key-vault-cli-kv-creation.md)]
+## Give your user account permissions to manage secrets in Key Vault
++ ## Add a secret to Key Vault To add a secret to the vault, you just need to take a couple of additional steps. This password could be used by an application. The password will be called **ExamplePassword** and will store the value of **hVFkk965BuUv** in it.
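+For example, a minimal sketch of adding that secret with the Azure CLI (replace the vault name with your own):
+
+```azurecli-interactive
+# Store the secret named ExamplePassword with the value hVFkk965BuUv
+az keyvault secret set --vault-name "<your-unique-keyvault-name>" --name "ExamplePassword" --value "hVFkk965BuUv"
+```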
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Open the *pom.xml* file in your text editor. Add the following dependency elemen
#### Grant access to your key vault
-Create an access policy for your key vault that grants secret permissions to your user account.
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --secret-permissions delete get list set purge
-```
#### Set environment variables
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
This quickstart is using Azure Identity library with Azure CLI to authenticate u
### Grant access to your key vault
-Create an access policy for your key vault that grants secret permissions to your user account
-
-```azurecli
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --secret-permissions delete get list set purge
-```
### [Azure PowerShell](#tab/azure-powershell)
This quickstart is using Azure Identity library with Azure PowerShell to authent
### Grant access to your key vault
-Create an access policy for your key vault that grants secret permissions to your user account
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -VaultName "<YourKeyVaultName>" -UserPrincipalName "user@domain.com" -PermissionsToSecrets delete,get,list,set,purge
-```
- ### Create new .NET console app
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-node.md
Create a Node.js application that uses your key vault.
## Grant access to your key vault
-Create a vault access policy for your key vault that grants secret permissions to your user account with the [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) command.
-
-```azurecli
-az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --secret-permissions delete get list set purge update
-```
## Set environment variables
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-portal.md
- Previously updated : 01/30/2024 Last updated : 04/04/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
Sign in to the [Azure portal](https://portal.azure.com).
To add a secret to the vault, follow the steps:
-1. Navigate to your new key vault in the Azure portal
-1. On the Key Vault settings pages, select **Secrets**.
-1. Select on **Generate/Import**.
+1. Navigate to your key vault in the Azure portal.
+1. On the Key Vault left-hand sidebar, select **Objects** then select **Secrets**.
+1. Select **+ Generate/Import**.
1. On the **Create a secret** screen choose the following values: - **Upload options**: Manual. - **Name**: Type a name for the secret. The secret name must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -. For more information on naming, see [Key Vault objects, identifiers, and versioning](../general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning)
- - **Value**: Type a value for the secret. Key Vault APIs accept and return secret values as strings.
+ - **Value**: Type a value for the secret. Key Vault APIs accept and return secret values as strings.
- Leave the other values to their defaults. Select **Create**.
-Once that you receive the message that the secret has been successfully created, you may select on it on the list.
+Once you receive the message that the secret has been successfully created, you can select it from the list.
For more information on secrets attributes, see [About Azure Key Vault secrets](./about-secrets.md)
If you select on the current version, you can see the value you specified in the
:::image type="content" source="../media/quick-create-portal/current-version-hidden.png" alt-text="Secret properties":::
-By clicking "Show Secret Value" button in the right pane, you can see the hidden value.
+By selecting the **Show Secret Value** button in the right pane, you can see the hidden value.
:::image type="content" source="../media/quick-create-portal/current-version-shown.png" alt-text="Secret value appeared":::
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Connect-AzAccount
## Give your user account permissions to manage secrets in Key Vault
-Use the Azure PowerShell [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet to update the Key Vault access policy and grant secret permissions to your user account.
-
-```azurepowershell-interactive
-Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToSecrets get,set,delete
-```
## Adding a secret to Key Vault
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
Get started with the Azure Key Vault secret client library for Python. Follow th
This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) in a Linux terminal window. - ## Set up your local environment This quickstart is using Azure Identity library with Azure CLI or Azure PowerShell to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls, for more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
This quickstart is using Azure Identity library with Azure CLI or Azure PowerShe
### Grant access to your key vault
-Create an access policy for your key vault that grants secret permission to your user account.
-
-### [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --secret-permissions delete get list set
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToSecrets delete,get,list,set
-```
-- ## Create the sample code
kubernetes-fleet Concepts Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-resource-propagation.md
Title: "Kubernetes resource propagation from hub cluster to member clusters (preview)"
+ Title: "Kubernetes resource propagation from hub cluster to member clusters (Preview)"
description: This article describes the concept of Kubernetes resource propagation from hub cluster to member clusters. Last updated 03/04/2024
-# Kubernetes resource propagation from hub cluster to member clusters (preview)
+# Kubernetes resource propagation from hub cluster to member clusters (Preview)
+This article describes the concept of Kubernetes resource propagation from hub clusters to member clusters using Azure Kubernetes Fleet Manager (Fleet).
+
+Platform admins often need to deploy Kubernetes resources into multiple clusters for various reasons, for example:
+
+* Managing access control using roles and role bindings across multiple clusters.
+* Running infrastructure applications, such as Prometheus or Flux, that need to be on all clusters.
-Platform admins often need to deploy Kubernetes resources into multiple clusters, for example:
-* Roles and role bindings to manage who can access what.
-* An infrastructure application that needs to be on all clusters, for example, Prometheus, Flux.
+Application developers often need to deploy Kubernetes resources into multiple clusters for various reasons, for example:
-Application developers often need to deploy Kubernetes resources into multiple clusters, for example:
-* Deploy a video serving application into multiple clusters, one per region, for low latency watching experience.
-* Deploy a shopping cart application into two paired regions for customers to continue to shop during a single region outage.
-* Deploy a batch compute application into clusters with inexpensive spot node pools available.
+* Deploying a video serving application into multiple clusters in different regions for a low latency watching experience.
+* Deploying a shopping cart application into two paired regions for customers to continue to shop during a single region outage.
+* Deploying a batch compute application into clusters with inexpensive spot node pools available.
+
+It's tedious to create, update, and track these Kubernetes resources across multiple clusters manually. Fleet provides Kubernetes resource propagation to enable at-scale management of Kubernetes resources. With Fleet, you can create Kubernetes resources in the hub cluster and propagate them to selected member clusters via Kubernetes Custom Resources: `MemberCluster` and `ClusterResourcePlacement`. Fleet supports these custom resources based on an [open-source cloud-native multi-cluster solution][fleet-github]. For more information, see the [upstream Fleet documentation][fleet-github].
+
-It's tedious to create and update these Kubernetes resources across tens or even hundreds of clusters, and track their current status in each cluster.
-Azure Kubernetes Fleet Manager (Fleet) provides Kubernetes resource propagation to enable at-scale management of Kubernetes resources.
+## Resource propagation workflow
-You can create Kubernetes resources in the hub cluster and propagate them to selected member clusters via Kubernetes Customer Resources: `MemberCluster` and `ClusterResourcePlacement`.
-Fleet supports these custom resources based on an [open-source cloud-native multi-cluster solution][fleet-github].
+[![Diagram that shows how Kubernetes resource are propagated to member clusters.](./media/conceptual-resource-propagation.png)](./media/conceptual-resource-propagation.png#lightbox)
-## What is `MemberCluster`?
+## What is a `MemberCluster`?
-Once a cluster joins a fleet, a corresponding `MemberCluster` custom resource is created on the hub cluster.
-You can use it to select target clusters in resource propagation.
+Once a cluster joins a fleet, a corresponding `MemberCluster` custom resource is created on the hub cluster. You can use this custom resource to select target clusters in resource propagation.
-The following labels are added automatically to all member clusters, which can be used for target cluster selection in resource propagation.
+The following labels can be used for target cluster selection in resource propagation and are automatically added to all member clusters:
* `fleet.azure.com/location` * `fleet.azure.com/resource-group` * `fleet.azure.com/subscription-id`
-You can find the API reference of `MemberCluster` [here][membercluster-api].
+For more information, see the [MemberCluster API reference][membercluster-api].
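+
+For example, after connecting to the hub cluster you can list the member clusters and the labels added to them. This is a minimal illustration; the output depends on your fleet:
+
+```bash
+# List the member clusters registered with the hub cluster, including their labels
+kubectl get memberclusters --show-labels
+```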
+
+## What is a `ClusterResourcePlacement`?
+
+A `ClusterResourcePlacement` object is used to tell the Fleet scheduler how to place a given set of cluster-scoped objects from the hub cluster into member clusters. Namespace-scoped objects like Deployments, StatefulSets, DaemonSets, ConfigMaps, Secrets, and PersistentVolumeClaims are included when their containing namespace is selected.
+
+With `ClusterResourcePlacement`, you can:
+
+* Select which cluster-scoped Kubernetes resources to propagate to member clusters.
+* Specify placement policies to manually or automatically select a subset or all of the member clusters as target clusters.
+* Specify rollout strategies to safely roll out any updates of the selected Kubernetes resources to multiple target clusters.
+* View the propagation progress towards each target cluster.
+
+The `ClusterResourcePlacement` object supports [using ConfigMap to envelope the object][envelope-object] to help propagate to member clusters without any unintended side effects. Selection methods include:
+
+* **Group, version, and kind**: Select and place all resources of the given type.
+* **Group, version, kind, and name**: Select and place one particular resource of a given type.
+* **Group, version, kind, and labels**: Select and place all resources of a given type that match the labels supplied.
+
+For more information, see the [`ClusterResourcePlacement` API reference][clusterresourceplacement-api].
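+
+For example, a label-based resource selector applied from the hub cluster might look like the following sketch; the `crp-labels` name and the `app: web` label are illustrative, the `labelSelector` field follows the upstream Fleet `ClusterResourceSelector` API, and the placement policy types are described next:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+  name: crp-labels
+spec:
+  resourceSelectors:
+    # Select every namespace that carries the app: web label
+    - group: ""
+      kind: Namespace
+      version: v1
+      labelSelector:
+        matchLabels:
+          app: web
+  policy:
+    placementType: PickAll
+EOF
+```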
+
+Once you select the resources, multiple placement policies are available:
+
+* `PickAll` places the resources into all available member clusters. This policy is useful for placing infrastructure workloads, like cluster monitoring or reporting applications.
+* `PickFixed` places the resources into a specific list of member clusters by name.
+* `PickN` is the most flexible placement option. It allows selecting clusters based on affinities or topology spread constraints, and is useful when you want to spread workloads across an appropriate set of clusters to ensure availability.
+
+### `PickAll` placement policy
+
+You can use a `PickAll` placement policy to deploy a workload across all member clusters in the fleet (optionally matching a set of criteria).
+
+The following example shows how to deploy a `test-deployment` namespace and all of its objects across all clusters labeled with `environment: production`:
+
+```yaml
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+ name: crp-1
+spec:
+ policy:
+ placementType: PickAll
+ affinity:
+ clusterAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ environment: production
+ resourceSelectors:
+ - group: ""
+ kind: Namespace
+      name: test-deployment
+ version: v1
+```
+
+This simple policy takes the `test-deployment` namespace and all resources contained within it and deploys it to all member clusters in the fleet with the given `environment` label. If all clusters are desired, you can remove the `affinity` term entirely.
+
+### `PickFixed` placement policy
+
+If you want to deploy a workload into a known set of member clusters, you can use a `PickFixed` placement policy to select the clusters by name.
+
+The following example shows how to deploy the `test-deployment` namespace into member clusters `cluster1` and `cluster2`:
+
+```yaml
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+ name: crp-2
+spec:
+ policy:
+ placementType: PickFixed
+ clusterNames:
+ - cluster1
+ - cluster2
+ resourceSelectors:
+ - group: ""
+ kind: Namespace
+ name: test-deployment
+ version: v1
+```
+
+### `PickN` placement policy
+
+The `PickN` placement policy is the most flexible option and allows for placement of resources into a configurable number of clusters based on both affinities and topology spread constraints.
+
+#### `PickN` with affinities
+
+Using affinities with a `PickN` placement policy functions similarly to using affinities with pod scheduling. You can set both required and preferred affinities. Required affinities prevent placement to clusters that don't match the specified affinities, and preferred affinities allow for ordering the set of valid clusters when a placement decision is being made.
+
+The following example shows how to deploy a workload into three clusters. Only clusters with the `critical-allowed: "true"` label are valid placement targets, and preference is given to clusters with the label `critical-level: 1`:
+
+```yaml
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+ name: crp
+spec:
+ resourceSelectors:
+ - ...
+ policy:
+ placementType: PickN
+ numberOfClusters: 3
+ affinity:
+ clusterAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ weight: 20
+ preference:
+ - labelSelector:
+ matchLabels:
+ critical-level: 1
+ requiredDuringSchedulingIgnoredDuringExecution:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ critical-allowed: "true"
+```
+
+#### `PickN` with topology spread constraints
+
+You can use topology spread constraints to force the division of the cluster placements across topology boundaries to satisfy availability requirements, for example, splitting placements across regions or update rings. You can also configure topology spread constraints to prevent scheduling if the constraint can't be met (`whenUnsatisfiable: DoNotSchedule`) or schedule as best possible (`whenUnsatisfiable: ScheduleAnyway`).
+
+The following example shows how to spread a given set of resources across multiple regions and schedule them across member clusters with different update days:
+
+```yaml
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+ name: crp
+spec:
+ resourceSelectors:
+ - ...
+ policy:
+ placementType: PickN
+ topologySpreadConstraints:
+ - maxSkew: 2
+ topologyKey: region
+ whenUnsatisfiable: DoNotSchedule
+ - maxSkew: 2
+ topologyKey: updateDay
+ whenUnsatisfiable: ScheduleAnyway
+```
+
+For more information, see the [upstream topology spread constraints Fleet documentation][crp-topo].
+
+## Update strategy
+
+Fleet uses a rolling update strategy to control how updates are rolled out across multiple cluster placements.
+
+The following example shows how to configure a rolling update strategy using the default settings:
+
+```yaml
+apiVersion: placement.kubernetes-fleet.io/v1beta1
+kind: ClusterResourcePlacement
+metadata:
+ name: crp
+spec:
+ resourceSelectors:
+ - ...
+ policy:
+ ...
+ strategy:
+ type: RollingUpdate
+ rollingUpdate:
+ maxUnavailable: 25%
+ maxSurge: 25%
+ unavailablePeriodSeconds: 60
+```
+
+The scheduler rolls out updates to each cluster sequentially, waiting at least `unavailablePeriodSeconds` between clusters. Rollout status is considered successful if all resources were correctly applied to the cluster. Rollout status checking doesn't cascade to child resources, for example, it doesn't confirm that pods created by a deployment become ready.
+
+For more information, see the [upstream rollout strategy Fleet documentation][fleet-rollout].
+
+## Placement status
+
+The Fleet scheduler records the details and status of placement decisions on the `ClusterResourcePlacement` object. You can view this information using the `kubectl describe crp <name>` command. The output includes the following information:
+
+* The conditions that currently apply to the placement, including whether the placement completed successfully.
+* A placement status section for each member cluster, which shows the status of deployment to that cluster.
+
+The following example shows a `ClusterResourcePlacement` that deployed the `test` namespace and the `test-1` ConfigMap into two member clusters using `PickN`. The placement was successfully completed and the resources were placed into the `aks-member-1` and `aks-member-2` clusters.
+
+```output
+Name: crp-1
+Namespace:
+Labels: <none>
+Annotations: <none>
+API Version: placement.kubernetes-fleet.io/v1beta1
+Kind: ClusterResourcePlacement
+Metadata:
+ ...
+Spec:
+ Policy:
+ Number Of Clusters: 2
+ Placement Type: PickN
+ Resource Selectors:
+ Group:
+ Kind: Namespace
+ Name: test
+ Version: v1
+ Revision History Limit: 10
+Status:
+ Conditions:
+ Last Transition Time: 2023-11-10T08:14:52Z
+ Message: found all the clusters needed as specified by the scheduling policy
+ Observed Generation: 5
+ Reason: SchedulingPolicyFulfilled
+ Status: True
+ Type: ClusterResourcePlacementScheduled
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: All 2 cluster(s) are synchronized to the latest resources on the hub cluster
+ Observed Generation: 5
+ Reason: SynchronizeSucceeded
+ Status: True
+ Type: ClusterResourcePlacementSynchronized
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: Successfully applied resources to 2 member clusters
+ Observed Generation: 5
+ Reason: ApplySucceeded
+ Status: True
+ Type: ClusterResourcePlacementApplied
+ Placement Statuses:
+ Cluster Name: aks-member-1
+ Conditions:
+ Last Transition Time: 2023-11-10T08:14:52Z
+ Message: Successfully scheduled resources for placement in aks-member-1 (affinity score: 0, topology spread score: 0): picked by scheduling policy
+ Observed Generation: 5
+ Reason: ScheduleSucceeded
+ Status: True
+ Type: ResourceScheduled
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: Successfully Synchronized work(s) for placement
+ Observed Generation: 5
+ Reason: WorkSynchronizeSucceeded
+ Status: True
+ Type: WorkSynchronized
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: Successfully applied resources
+ Observed Generation: 5
+ Reason: ApplySucceeded
+ Status: True
+ Type: ResourceApplied
+ Cluster Name: aks-member-2
+ Conditions:
+ Last Transition Time: 2023-11-10T08:14:52Z
+ Message: Successfully scheduled resources for placement in aks-member-2 (affinity score: 0, topology spread score: 0): picked by scheduling policy
+ Observed Generation: 5
+ Reason: ScheduleSucceeded
+ Status: True
+ Type: ResourceScheduled
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: Successfully Synchronized work(s) for placement
+ Observed Generation: 5
+ Reason: WorkSynchronizeSucceeded
+ Status: True
+ Type: WorkSynchronized
+ Last Transition Time: 2023-11-10T08:23:43Z
+ Message: Successfully applied resources
+ Observed Generation: 5
+ Reason: ApplySucceeded
+ Status: True
+ Type: ResourceApplied
+ Selected Resources:
+ Kind: Namespace
+ Name: test
+ Version: v1
+ Kind: ConfigMap
+ Name: test-1
+ Namespace: test
+ Version: v1
+Events:
+ Type Reason Age From Message
+  ----     ------                      ----                   ----                                   -------
+ Normal PlacementScheduleSuccess 12m (x5 over 3d22h) cluster-resource-placement-controller Successfully scheduled the placement
+ Normal PlacementSyncSuccess 3m28s (x7 over 3d22h) cluster-resource-placement-controller Successfully synchronized the placement
+ Normal PlacementRolloutCompleted 3m28s (x7 over 3d22h) cluster-resource-placement-controller Resources have been applied to the selected clusters
+```
-## What is `ClusterResourcePlacement`?
+## Placement changes
-Fleet provides `ClusterResourcePlacement` as a mechanism to control how cluster-scoped Kubernetes resources are propagated to member clusters.
+The Fleet scheduler prioritizes the stability of existing workload placements. This prioritization can limit the number of changes that cause a workload to be removed and rescheduled. The following scenarios can trigger placement changes:
-Via `ClusterResourcePlacement`, you can:
-- Select which cluster-scoped Kubernetes resources to propagate to member clusters-- Specify placement policies to manually or automatically select a subset or all of the member clusters as target clusters-- Specify rollout strategies to safely roll out any updates of the selected Kubernetes resources to multiple target clusters-- View the propagation progress towards each target cluster
+* Placement policy changes in the `ClusterResourcePlacement` object can trigger removal and rescheduling of a workload.
+  * Scale out operations (increasing `numberOfClusters` with no other changes) place workloads only on new clusters and don't affect existing placements (see the sketch at the end of this section).
+* Cluster changes, including:
+  * A new cluster becoming eligible might trigger placement if it meets the placement policy, for example, a `PickAll` policy.
+  * If a cluster with a placement is removed from the fleet, Fleet attempts to place the affected workloads onto other clusters without affecting their other placements.
-In order to propagate namespace-scoped resources, you can select a namespace which by default selecting both the namespace and all the namespace-scoped resources under it.
+Resource-only changes (updating the resources or updating the `ResourceSelector` in the `ClusterResourcePlacement` object) roll out gradually in existing placements but do **not** trigger rescheduling of the workload.
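+
+For example, a minimal sketch of the scale-out scenario described above; the `crp` name and the new cluster count are illustrative:
+
+```bash
+# Increase numberOfClusters; existing placements are kept, and only newly selected clusters receive the workload
+kubectl patch clusterresourceplacement crp --type merge --patch '{"spec":{"policy":{"numberOfClusters":4}}}'
+```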
-The following diagram shows a sample `ClusterResourcePlacement`.
-[ ![Diagram that shows how Kubernetes resource are propagated to member clusters.](./media/conceptual-resource-propagation.png) ](./media/conceptual-resource-propagation.png#lightbox)
+## Access the Kubernetes API of the Fleet resource cluster
-You can find the API reference of `ClusterResourcePlacement` [here][clusterresourceplacement-api].
+If you created an Azure Kubernetes Fleet Manager resource with the hub cluster enabled, you can use it to centrally control scenarios like Kubernetes object propagation. To access the Kubernetes API of the Fleet resource cluster, follow the steps in [Access the Kubernetes API of the Fleet resource cluster with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md).
-## Next Steps
+## Next steps
-* [Set up Kubernetes resource propagation from hub cluster to member clusters](./resource-propagation.md).
+[Set up Kubernetes resource propagation from hub cluster to member clusters](./quickstart-resource-propagation.md).
<!-- LINKS - external --> [fleet-github]: https://github.com/Azure/fleet [membercluster-api]: https://github.com/Azure/fleet/blob/main/docs/api-references.md#membercluster
-[clusterresourceplacement-api]: https://github.com/Azure/fleet/blob/main/docs/api-references.md#clusterresourceplacement
+[clusterresourceplacement-api]: https://github.com/Azure/fleet/blob/main/docs/api-references.md#clusterresourceplacement
+[envelope-object]: https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md#envelope-object
+[crp-topo]: https://github.com/Azure/fleet/blob/main/docs/howtos/topology-spread-constraints.md
+[fleet-rollout]: https://github.com/Azure/fleet/blob/main/docs/howtos/crp.md#rollout-strategy
kubernetes-fleet L4 Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md
You can follow this document to set up layer 4 load balancing for such multi-clu
* These target clusters have to be [added as member clusters to the Fleet resource](./quickstart-create-fleet-and-members.md#join-member-clusters). * These target clusters should be using [Azure CNI (Container Networking Interface) networking](../aks/configure-azure-cni.md).
-* You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./access-fleet-kubernetes-api.md).
+* You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md).
* Set the following environment variables and obtain the kubeconfigs for the fleet and all member clusters:
kubernetes-fleet Quickstart Access Fleet Kubernetes Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-access-fleet-kubernetes-api.md
+
+ Title: "Quickstart: Access the Kubernetes API of the Fleet resource"
+description: Learn how to access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager.
+ Last updated : 04/01/2024+++++
+# Quickstart: Access the Kubernetes API of the Fleet resource
+
+If your Azure Kubernetes Fleet Manager resource was created with the hub cluster enabled, then it can be used to centrally control scenarios like Kubernetes resource propagation. In this article, you learn how to access the Kubernetes API of the hub cluster managed by the Fleet resource.
+
+## Prerequisites
++
+* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md).
+* The identity (user or service principal) you're using needs to have the `Microsoft.ContainerService/fleets/listCredentials/action` permission on the Fleet resource.
+
+## Access the Kubernetes API of the Fleet resource
+
+1. Set the following environment variables for your subscription ID, resource group, and Fleet resource:
+
+ ```azurecli-interactive
+ export SUBSCRIPTION_ID=<subscription-id>
+ export GROUP=<resource-group-name>
+ export FLEET=<fleet-name>
+ ```
+
+2. Set the default Azure subscription using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ az account set --subscription ${SUBSCRIPTION_ID}
+ ```
+
+3. Get the kubeconfig file of the hub cluster Fleet resource using the [`az fleet get-credentials`][az-fleet-get-credentials] command.
+
+ ```azurecli-interactive
+ az fleet get-credentials --resource-group ${GROUP} --name ${FLEET}
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ Merged "hub" as current context in /home/fleet/.kube/config
+ ```
+
+4. Set the following environment variable for the `id` of the hub cluster Fleet resource:
+
+ ```azurecli-interactive
+ export FLEET_ID=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/fleets/${FLEET}
+ ```
+
+5. Authorize your identity to the hub cluster Fleet resource's Kubernetes API server using the following commands:
+
+ For the `ROLE` environment variable, you can use one of the following four built-in role definitions as the value:
+
+ * Azure Kubernetes Fleet Manager RBAC Reader
+ * Azure Kubernetes Fleet Manager RBAC Writer
+ * Azure Kubernetes Fleet Manager RBAC Admin
+ * Azure Kubernetes Fleet Manager RBAC Cluster Admin
+
+ ```azurecli-interactive
+ export IDENTITY=$(az ad signed-in-user show --query "id" --output tsv)
+ export ROLE="Azure Kubernetes Fleet Manager RBAC Cluster Admin"
+ az role assignment create --role "${ROLE}" --assignee ${IDENTITY} --scope ${FLEET_ID}
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ {
+ "canDelegate": null,
+ "condition": null,
+ "conditionVersion": null,
+ "description": null,
+ "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/providers/Microsoft.Authorization/roleAssignments/<assignment>",
+ "name": "<name>",
+ "principalId": "<id>",
+ "principalType": "User",
+ "resourceGroup": "<GROUP>",
+ "roleDefinitionId": "/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/roleDefinitions/18ab4d3d-a1bf-4477-8ad9-8359bc988f69",
+ "scope": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>",
+ "type": "Microsoft.Authorization/roleAssignments"
+ }
+ ```
+
+6. Verify you can access the API server using the `kubectl get memberclusters` command.
+
+ ```bash
+ kubectl get memberclusters
+ ```
+
+ If successful, your output should look similar to the following example output:
+
+ ```output
+ NAME JOINED AGE
+ aks-member-1 True 2m
+ aks-member-2 True 2m
+ aks-member-3 True 2m
+ ```
+
+## Next steps
+
+* [Propagate resources from a Fleet hub cluster to member clusters](./quickstart-resource-propagation.md).
+
+<!-- LINKS -->
+[fleet-apispec]: https://github.com/Azure/fleet/blob/main/docs/api-references.md
+[troubleshooting-guide]: https://github.com/Azure/fleet/blob/main/docs/troubleshooting/README.md
+[az-fleet-get-credentials]: /cli/azure/fleet#az-fleet-get-credentials
+[az-account-set]: /cli/azure/account#az-account-set
kubernetes-fleet Quickstart Create Fleet And Members Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members-portal.md
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure porta
## Prerequisites + * Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document. * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An identity (user or service principal) with the following permissions on the Fleet and AKS resource types for completing the steps listed in this quickstart:
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure porta
## Next steps
-* [Orchestrate updates across multiple member clusters](./update-orchestration.md).
-* [Set up Kubernetes resource propagation from hub cluster to member clusters](./resource-propagation.md).
-* [Set up multi-cluster layer-4 load balancing](./l4-load-balancing.md).
+* [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md).
kubernetes-fleet Quickstart Create Fleet And Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI t
## Prerequisites + * Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document. * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An identity (user or service principal) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli). This identity needs to have the following permissions on the Fleet and AKS resource types for completing the steps listed in this quickstart:
Fleet currently supports joining existing AKS clusters as member clusters.
```azurecli-interactive # Join the first member cluster
- az fleet member create \
- --resource-group ${GROUP} \
- --fleet-name ${FLEET} \
- --name ${MEMBER_NAME_1} \
- --member-cluster-id ${MEMBER_CLUSTER_ID_1}
+ az fleet member create --resource-group ${GROUP} --fleet-name ${FLEET} --name ${MEMBER_NAME_1} --member-cluster-id ${MEMBER_CLUSTER_ID_1}
``` Your output should look similar to the following example output:
Fleet currently supports joining existing AKS clusters as member clusters.
## Next steps
-* [Orchestrate updates across multiple member clusters](./update-orchestration.md).
-* [Set up Kubernetes resource propagation from hub cluster to member clusters](./resource-propagation.md).
-* [Set up multi-cluster layer-4 load balancing](./l4-load-balancing.md).
+* [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md).
<!-- INTERNAL LINKS --> [az-extension-add]: /cli/azure/extension#az-extension-add
kubernetes-fleet Quickstart Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-resource-propagation.md
+
+ Title: "Quickstart: Propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters (Preview)"
+description: In this quickstart, you learn how to propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters.
Last updated : 03/28/2024++++++
+# Quickstart: Propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters
+
+In this quickstart, you learn how to propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters.
+
+## Prerequisites
++
+* Read the [resource propagation conceptual overview](./concepts-resource-propagation.md) to understand the concepts and terminology used in this quickstart.
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md).
+* Member clusters must be labeled appropriately in the hub cluster to match the desired selection criteria. Example labels include region, environment, team, availability zones, node availability, or anything else desired (see the labeling sketch after this list).
+* You need access to the Kubernetes API of the hub cluster. If you don't have access, see [Access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md).
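+
+The following is a minimal sketch of labeling a member cluster from the hub cluster; the cluster name and label are illustrative:
+
+```bash
+# Add a label to a member cluster so placement policies can select it
+kubectl label membercluster aks-member-1 environment=production
+```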
+
+## Use the `ClusterResourcePlacement` API to propagate resources to member clusters
+
+The `ClusterResourcePlacement` API object is created in the hub cluster and specifies which resources to propagate to member clusters and the placement policy to use when selecting those clusters. This example demonstrates how to propagate a namespace to member clusters using a `ClusterResourcePlacement` object with a `PickAll` placement policy.
+
+For more information, see [Kubernetes resource propagation from hub cluster to member clusters (Preview)](./concepts-resource-propagation.md) and the [upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md).
+
+1. Create a namespace to place onto the member clusters using the `kubectl create namespace` command. The following example creates a namespace named `my-namespace`:
+
+ ```bash
+ kubectl create namespace my-namespace
+ ```
+
+2. Create a `ClusterResourcePlacement` API object in the hub cluster to propagate the namespace to the member clusters, and deploy it using the `kubectl apply -f` command. The following example creates a `ClusterResourcePlacement` object named `crp` that selects the `my-namespace` namespace and uses a `PickAll` placement policy to propagate it to all member clusters:
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: placement.kubernetes-fleet.io/v1beta1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: crp
+ spec:
+ resourceSelectors:
+ - group: ""
+ kind: Namespace
+ version: v1
+ name: my-namespace
+ policy:
+ placementType: PickAll
+ EOF
+ ```
+
+3. Check the progress of the resource propagation using the `kubectl get clusterresourceplacement` command. The following example checks the status of the `ClusterResourcePlacement` object named `crp`:
+
+ ```bash
+ kubectl get clusterresourceplacement crp
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE
+ crp 2 True 2 True 2 10s
+ ```
+
+4. View the details of the `crp` object using the `kubectl describe crp` command. The following example describes the `ClusterResourcePlacement` object named `crp`:
+
+ ```bash
+ kubectl describe clusterresourceplacement crp
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ Name: crp
+ Namespace:
+ Labels: <none>
+ Annotations: <none>
+ API Version: placement.kubernetes-fleet.io/v1beta1
+ Kind: ClusterResourcePlacement
+ Metadata:
+ Creation Timestamp: 2024-04-01T18:55:31Z
+ Finalizers:
+ kubernetes-fleet.io/crp-cleanup
+ kubernetes-fleet.io/scheduler-cleanup
+ Generation: 2
+ Resource Version: 6949
+ UID: 815b1d81-61ae-4fb1-a2b1-06794be3f986
+ Spec:
+ Policy:
+ Placement Type: PickAll
+ Resource Selectors:
+ Group:
+ Kind: Namespace
+ Name: my-namespace
+ Version: v1
+ Revision History Limit: 10
+ Strategy:
+ Type: RollingUpdate
+ Status:
+ Conditions:
+ Last Transition Time: 2024-04-01T18:55:31Z
+ Message: found all the clusters needed as specified by the scheduling policy
+ Observed Generation: 2
+ Reason: SchedulingPolicyFulfilled
+ Status: True
+ Type: ClusterResourcePlacementScheduled
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: All 3 cluster(s) are synchronized to the latest resources on the hub cluster
+ Observed Generation: 2
+ Reason: SynchronizeSucceeded
+ Status: True
+ Type: ClusterResourcePlacementSynchronized
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully applied resources to 3 member clusters
+ Observed Generation: 2
+ Reason: ApplySucceeded
+ Status: True
+ Type: ClusterResourcePlacementApplied
+ Observed Resource Index: 0
+ Placement Statuses:
+ Cluster Name: membercluster1
+ Conditions:
+ Last Transition Time: 2024-04-01T18:55:31Z
+ Message: Successfully scheduled resources for placement in membercluster1 (affinity score: 0, topology spread score: 0): picked by scheduling policy
+ Observed Generation: 2
+ Reason: ScheduleSucceeded
+ Status: True
+ Type: ResourceScheduled
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully Synchronized work(s) for placement
+ Observed Generation: 2
+ Reason: WorkSynchronizeSucceeded
+ Status: True
+ Type: WorkSynchronized
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully applied resources
+ Observed Generation: 2
+ Reason: ApplySucceeded
+ Status: True
+ Type: ResourceApplied
+ Cluster Name: membercluster2
+ Conditions:
+ Last Transition Time: 2024-04-01T18:55:31Z
+ Message: Successfully scheduled resources for placement in membercluster2 (affinity score: 0, topology spread score: 0): picked by scheduling policy
+ Observed Generation: 2
+ Reason: ScheduleSucceeded
+ Status: True
+ Type: ResourceScheduled
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully Synchronized work(s) for placement
+ Observed Generation: 2
+ Reason: WorkSynchronizeSucceeded
+ Status: True
+ Type: WorkSynchronized
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully applied resources
+ Observed Generation: 2
+ Reason: ApplySucceeded
+ Status: True
+ Type: ResourceApplied
+ Cluster Name: membercluster3
+ Conditions:
+ Last Transition Time: 2024-04-01T18:55:31Z
+ Message: Successfully scheduled resources for placement in membercluster3 (affinity score: 0, topology spread score: 0): picked by scheduling policy
+ Observed Generation: 2
+ Reason: ScheduleSucceeded
+ Status: True
+ Type: ResourceScheduled
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully Synchronized work(s) for placement
+ Observed Generation: 2
+ Reason: WorkSynchronizeSucceeded
+ Status: True
+ Type: WorkSynchronized
+ Last Transition Time: 2024-04-01T18:55:36Z
+ Message: Successfully applied resources
+ Observed Generation: 2
+ Reason: ApplySucceeded
+ Status: True
+ Type: ResourceApplied
+ Selected Resources:
+ Kind: Namespace
+ Name: my-namespace
+ Version: v1
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal PlacementScheduleSuccess 108s cluster-resource-placement-controller Successfully scheduled the placement
+ Normal PlacementSyncSuccess 103s cluster-resource-placement-controller Successfully synchronized the placement
+ Normal PlacementRolloutCompleted 103s cluster-resource-placement-controller Resources have been applied to the selected clusters
+ ````
+
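+Optionally, you can confirm that the namespace was created on a member cluster by querying that cluster directly. The kubeconfig context name below is a placeholder for one of your member clusters.
+
+```bash
+# Check for the propagated namespace on a member cluster (context name is a placeholder)
+kubectl get namespace my-namespace --context member-cluster-1
+```
+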
+## Clean up resources
+
+If you no longer wish to use the `ClusterResourcePlacement` object, you can delete it using the `kubectl delete` command. The following example deletes the `ClusterResourcePlacement` object named `crp`:
+
+```bash
+kubectl delete clusterresourceplacement crp
+```
+
+## Next steps
+
+To learn more about resource propagation, see the following resources:
+
+* [Kubernetes resource propagation from hub cluster to member clusters (Preview)](./concepts-resource-propagation.md)
+* [Upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md)
kubernetes-fleet Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/resource-propagation.md
- Title: "Using cluster resource propagation (preview)"
-description: Learn how to use Azure Kubernetes Fleet Manager to intelligently place workloads across multiple clusters.
- Previously updated : 03/20/2024----
- - ignite-2023
--
-# Using cluster resource propagation (preview)
-
-Azure Kubernetes Fleet Manager (Fleet) resource propagation, based on an [open-source cloud-native multi-cluster solution][fleet-github] allows for deployment of any Kubernetes objects to fleet member clusters according to specified criteria. Workload orchestration can handle many use cases where an application needs to be deployed across multiple clusters, including the following and more:
-
-- An infrastructure application that needs to be on all clusters in the fleet
-- A web application that should be deployed into multiple clusters in different regions for high availability, and should have updates rolled out in a nondisruptive manner
-- A batch compute application that should be deployed into clusters with inexpensive spot node pools available
-
-Fleet workload placement can deploy any Kubernetes objects to member clusters. To deploy resources to member clusters, the objects must be created in a Fleet hub cluster, and a `ClusterResourcePlacement` object must be created to indicate how the objects should be placed.
-
-[ ![Diagram that shows how Kubernetes resource are propagated to member clusters.](./media/conceptual-resource-propagation.png) ](./media/conceptual-resource-propagation.png#lightbox)
--
-## Prerequisites
-
-- Read the [conceptual overview of this feature](./concepts-resource-propagation.md), which provides an explanation of `MemberCluster` and `ClusterResourcePlacement` referenced in this document.
-- You must have a Fleet resource with a hub cluster and member clusters. If you don't have this resource, follow [Quickstart: Create a Fleet resource and join member clusters](quickstart-create-fleet-and-members.md).
-- Member clusters must be labeled appropriately in the hub cluster to match the desired selection criteria. Example labels could include region, environment, team, availability zones, node availability, or anything else desired.
-- You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./access-fleet-kubernetes-api.md).
-
-## Resource placement with `ClusterResourcePlacement` resources
-
-A `ClusterResourcePlacement` object is used to tell the Fleet scheduler how to place a given set of cluster-scoped objects from the hub cluster into member clusters. Namespace-scoped objects like Deployments, StatefulSets, DaemonSets, ConfigMaps, Secrets, and PersistentVolumeClaims are included when their containing namespace is selected.
-(To propagate to the member clusters without any unintended side effects, the `ClusterResourcePlacement` object supports [using ConfigMap to envelope the object][envelope-object].) Multiple methods of selection can be used:
-
-- Group, version, and kind - select and place all resources of the given type
-- Group, version, kind, and name - select and place one particular resource of a given type
-- Group, version, kind, and labels - select and place all resources of a given type that match the labels supplied
-
-Once resources are selected, multiple types of placement are available:
-
-- `PickAll` places the resources into all available member clusters. This policy is useful for placing infrastructure workloads, like cluster monitoring or reporting applications.
-- `PickFixed` places the resources into a specific list of member clusters by name.
-- `PickN` is the most flexible placement option and allows for selection of clusters based on affinity or topology spread constraints; it's useful when you want to spread workloads across multiple appropriate clusters to ensure availability.
-
-### Using a `PickAll` placement policy
-
-To deploy a workload across all member clusters in the fleet (optionally matching a set of criteria), a `PickAll` placement policy can be used. To deploy the `test-deployment` Namespace and all of the objects in it across all of the clusters labeled with `environment: production`, create a `ClusterResourcePlacement` object as follows:
-
-```yaml
-apiVersion: placement.kubernetes-fleet.io/v1beta1
-kind: ClusterResourcePlacement
-metadata:
- name: crp-1
-spec:
- policy:
- placementType: PickAll
- affinity:
- clusterAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- clusterSelectorTerms:
- - labelSelector:
- matchLabels:
- environment: production
- resourceSelectors:
- - group: ""
- kind: Namespace
- name: test-deployment
- version: v1
-```
-
-This simple policy takes the `test-deployment` namespace and all resources contained within it and deploys it to all member clusters in the fleet with the given `environment` label. If all clusters are desired, remove the `affinity` term entirely.
-
-### Using a `PickFixed` placement policy
-
-If a workload should be deployed into a known set of member clusters, a `PickFixed` policy can be used to select the clusters by name. This `ClusterResourcePlacement` deploys the `test-deployment` namespace into member clusters `cluster1` and `cluster2`:
-
-```yaml
-apiVersion: placement.kubernetes-fleet.io/v1beta1
-kind: ClusterResourcePlacement
-metadata:
- name: crp-2
-spec:
- policy:
- placementType: PickFixed
- clusterNames:
- - cluster1
- - cluster2
- resourceSelectors:
- - group: ""
- kind: Namespace
- name: test-deployment
- version: v1
-```
-
-### Using a `PickN` placement policy
-
-The `PickN` placement policy is the most flexible option and allows for placement of resources into a configurable number of clusters based on both affinities and topology spread constraints.
-
-#### `PickN` with affinities
-
-Using affinities with `PickN` functions similarly to using affinities with pod scheduling. Both required and preferred affinities can be set. Required affinities prevent placement to clusters that don't match them; preferred affinities allow for ordering the set of valid clusters when a placement decision is being made.
-
-As an example, the following `ClusterResourcePlacement` object places a workload into three clusters. Only clusters that have the label `critical-allowed: "true"` are valid placement targets, with preference given to clusters with the label `critical-level: 1`:
-
-```yaml
-apiVersion: placement.kubernetes-fleet.io/v1beta1
-kind: ClusterResourcePlacement
-metadata:
- name: crp
-spec:
- resourceSelectors:
- - ...
- policy:
- placementType: PickN
- numberOfClusters: 3
- affinity:
- clusterAffinity:
- preferredDuringSchedulingIgnoredDuringExecution:
- weight: 20
- preference:
- - labelSelector:
- matchLabels:
- critical-level: 1
- requiredDuringSchedulingIgnoredDuringExecution:
- clusterSelectorTerms:
- - labelSelector:
- matchLabels:
- critical-allowed: "true"
-```
-
-#### `PickN` with topology spread constraints:
-
-Topology spread constraints can be used to force the division of the cluster placements across topology boundaries to satisfy availability requirements (for example, splitting placements across regions or update rings). Topology spread constraints can also be configured to prevent scheduling if the constraint can't be met (`whenUnsatisfiable: DoNotSchedule`) or schedule as best possible (`whenUnsatisfiable: ScheduleAnyway`).
-
-This `ClusterResourcePlacement` object spreads a given set of resources out across multiple regions and attempts to schedule across member clusters with different update days:
-
-```yaml
-apiVersion: placement.kubernetes-fleet.io/v1beta1
-kind: ClusterResourcePlacement
-metadata:
- name: crp
-spec:
- resourceSelectors:
- - ...
- policy:
- placementType: PickN
- topologySpreadConstraints:
- - maxSkew: 2
- topologyKey: region
- whenUnsatisfiable: DoNotSchedule
- - maxSkew: 2
- topologyKey: updateDay
- whenUnsatisfiable: ScheduleAnyway
-```
-
-For more details on how placement works with topology spread constraints, review the [topology spread constraints documentation in the open-source fleet project][crp-topo].
-
-## Update strategy
-
-Azure Kubernetes Fleet uses a rolling update strategy to control how updates are rolled out across multiple cluster placements. The default settings are in this example:
-
-```yaml
-apiVersion: placement.kubernetes-fleet.io/v1beta1
-kind: ClusterResourcePlacement
-metadata:
- name: crp
-spec:
- resourceSelectors:
- - ...
- policy:
- ...
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 25%
- maxSurge: 25%
- unavailablePeriodSeconds: 60
-```
-
-The scheduler will roll updates to each cluster sequentially, waiting at least `unavailablePeriodSeconds` between clusters. Rollout status is considered successful if all resources were correctly applied to the cluster. Rollout status checking doesn't cascade to child resources - for example, it doesn't confirm that pods created by a deployment become ready.
-
-For more details on cluster rollout strategy, see [the rollout strategy documentation in the open source project.][fleet-rollout]
-
-## Placement status
-
-The fleet scheduler updates details and status on placement decisions onto the `ClusterResourcePlacement` object. This information can be viewed via the `kubectl describe crp <name>` command. The output includes the following information:
-
-- The conditions that currently apply to the placement, which include whether the placement was successfully completed
-- A placement status section for each member cluster, which shows the status of deployment to that cluster
-
-This example shows a `ClusterResourcePlacement` that deployed the `test` namespace and the `test-1` ConfigMap it contained into two member clusters using `PickN`. The placement was successfully completed and the resources were placed into the `aks-member-1` and `aks-member-2` clusters.
-
-```
-Name: crp-1
-Namespace:
-Labels: <none>
-Annotations: <none>
-API Version: placement.kubernetes-fleet.io/v1beta1
-Kind: ClusterResourcePlacement
-Metadata:
- ...
-Spec:
- Policy:
- Number Of Clusters: 2
- Placement Type: PickN
- Resource Selectors:
- Group:
- Kind: Namespace
- Name: test
- Version: v1
- Revision History Limit: 10
-Status:
- Conditions:
- Last Transition Time: 2023-11-10T08:14:52Z
- Message: found all the clusters needed as specified by the scheduling policy
- Observed Generation: 5
- Reason: SchedulingPolicyFulfilled
- Status: True
- Type: ClusterResourcePlacementScheduled
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: All 2 cluster(s) are synchronized to the latest resources on the hub cluster
- Observed Generation: 5
- Reason: SynchronizeSucceeded
- Status: True
- Type: ClusterResourcePlacementSynchronized
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: Successfully applied resources to 2 member clusters
- Observed Generation: 5
- Reason: ApplySucceeded
- Status: True
- Type: ClusterResourcePlacementApplied
- Placement Statuses:
- Cluster Name: aks-member-1
- Conditions:
- Last Transition Time: 2023-11-10T08:14:52Z
- Message: Successfully scheduled resources for placement in aks-member-1 (affinity score: 0, topology spread score: 0): picked by scheduling policy
- Observed Generation: 5
- Reason: ScheduleSucceeded
- Status: True
- Type: ResourceScheduled
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: Successfully Synchronized work(s) for placement
- Observed Generation: 5
- Reason: WorkSynchronizeSucceeded
- Status: True
- Type: WorkSynchronized
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: Successfully applied resources
- Observed Generation: 5
- Reason: ApplySucceeded
- Status: True
- Type: ResourceApplied
- Cluster Name: aks-member-2
- Conditions:
- Last Transition Time: 2023-11-10T08:14:52Z
- Message: Successfully scheduled resources for placement in aks-member-2 (affinity score: 0, topology spread score: 0): picked by scheduling policy
- Observed Generation: 5
- Reason: ScheduleSucceeded
- Status: True
- Type: ResourceScheduled
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: Successfully Synchronized work(s) for placement
- Observed Generation: 5
- Reason: WorkSynchronizeSucceeded
- Status: True
- Type: WorkSynchronized
- Last Transition Time: 2023-11-10T08:23:43Z
- Message: Successfully applied resources
- Observed Generation: 5
- Reason: ApplySucceeded
- Status: True
- Type: ResourceApplied
- Selected Resources:
- Kind: Namespace
- Name: test
- Version: v1
- Kind: ConfigMap
- Name: test-1
- Namespace: test
- Version: v1
-Events:
- Type Reason Age From Message
-    ----    ------                     ---    ----                                   -------
- Normal PlacementScheduleSuccess 12m (x5 over 3d22h) cluster-resource-placement-controller Successfully scheduled the placement
- Normal PlacementSyncSuccess 3m28s (x7 over 3d22h) cluster-resource-placement-controller Successfully synchronized the placement
- Normal PlacementRolloutCompleted 3m28s (x7 over 3d22h) cluster-resource-placement-controller Resources have been applied to the selected clusters
-```
-
-## Placement changes
-
-The Fleet scheduler prioritizes the stability of existing workload placements, and thus the number of changes that cause a workload to be removed and rescheduled is limited.
--- Placement policy changes in the `ClusterResourcePlacement` object can trigger removal and rescheduling of a workload
- - Scale out operations (increasing `numberOfClusters` with no other changes) will only place workloads on new clusters and won't affect existing placements.
-- Cluster changes
- - A new cluster becoming eligible may trigger placement if it meets the placement policy - for example, a `PickAll` policy.
- - If a cluster with a placement is removed from the fleet, the scheduler attempts to re-place all affected workloads without affecting their other placements.
-
-Resource-only changes (updating the resources or updating the `ResourceSelector` in the `ClusterResourcePlacement` object) will be rolled out gradually in existing placements but will **not** trigger rescheduling of the workload.
-
-## Access the Kubernetes API of the Fleet resource cluster
-
-If the Azure Kubernetes Fleet Manager resource was created with the hub cluster enabled, then it can be used to centrally control scenarios like Kubernetes object propagation. To access the Kubernetes API of the Fleet resource cluster, follow the steps in the [Access the Kubernetes API of the Fleet resource cluster with Azure Kubernetes Fleet Manager](access-fleet-kubernetes-api.md) article.
-
-## Next steps
-
-* Review the [`ClusterResourcePlacement` documentation and more in the open-source fleet repository][fleet-doc] for more examples
-* Review the [API specifications][fleet-apispec] for all fleet custom resources.
-* Review more information about [the fleet scheduler][fleet-scheduler] and how placement decisions are made.
-* Review our [troubleshooting guide][troubleshooting-guide] to help resolve common issues related to the Fleet APIs.
-
-<!-- LINKS - external -->
-[fleet-github]: https://github.com/Azure/fleet
-[fleet-doc]: https://github.com/Azure/fleet/blob/main/docs/README.md
-[fleet-apispec]: https://github.com/Azure/fleet/blob/main/docs/api-references.md
-[fleet-scheduler]: https://github.com/Azure/fleet/blob/main/docs/concepts/Scheduler/README.md
-[fleet-rollout]: https://github.com/Azure/fleet/blob/main/docs/howtos/crp.md#rollout-strategy
-[crp-topo]: https://github.com/Azure/fleet/blob/main/docs/howtos/topology-spread-constraints.md
-[envelope-object]: https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md#envelope-object
-[troubleshooting-guide]: https://github.com/Azure/fleet/blob/main/docs/troubleshooting/README.md
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 01/09/2024 Last updated : 03/22/2024 # Limits and configuration reference for Azure Logic Apps
The following table lists the values for an **Until** loop:
| Name | Multitenant | Single-tenant | Integration service environment | Notes |
|------|------|------|------|------|
| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in multitenant Azure Logic Apps, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Maximum waiting runs | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | Concurrency off: <br><br>- Min: 1 run <br>(Default) <br><br>- Max: 50 runs <br>(Default) <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 200 runs <br>(Default) | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <br><br>To change this value in multitenant Azure Logic Apps, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Maximum waiting runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 100 runs | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 200 runs <br> | Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br>(Default)<br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. This setting takes effect only if concurrency is turned on. <br><br>To change this value in multitenant Azure Logic Apps, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
| **SplitOn** items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br>(Default) <br><br>Concurrency on: 100 items <br>(Default) | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. |

<a name="throughput-limits"></a>
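The trigger concurrency and maximum waiting runs limits in this table map to the trigger's `runtimeConfiguration` section in the underlying workflow definition. The following sketch only illustrates where those values live; the Recurrence trigger and the numbers are examples, and the valid ranges are the ones listed in the table.

```json
"triggers": {
   "Recurrence": {
      "type": "Recurrence",
      "recurrence": {
         "frequency": "Minute",
         "interval": 1
      },
      "runtimeConfiguration": {
         "concurrency": {
            "runs": 25,
            "maximumWaitingRuns": 50
         }
      }
   }
}
```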
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
To access data and other resources, a Spark job can use either a managed identit
|Spark pool|Supported identities|Default identity|
| - | -- | - |
-|Serverless Spark compute|User identity and managed identity|User identity|
-|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
+|Serverless Spark compute|User identity, user-assigned managed identity attached to the workspace|User identity|
+|Attached Synapse Spark pool|User identity, user-assigned managed identity attached to the attached Synapse Spark pool, system-assigned managed identity of the attached Synapse Spark pool|System-assigned managed identity of the attached Synapse Spark pool|
[This article](./apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the serverless Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
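For standalone Spark job submission, the identity the job should use is typically declared in the job specification itself. The following fragment is only a sketch: the `identity` block and the `user_identity`/`managed` values are assumptions based on the CLI (v2) job schema, so verify them against the Spark job YAML reference before relying on them.

```yaml
# Illustrative fragment of a CLI (v2) standalone Spark job specification.
# user_identity runs the job as the submitting user; managed uses the
# managed identity described in the table above.
identity:
  type: user_identity
```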
machine-learning Apache Spark Environment Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-environment-configuration.md
To access data and other resources, Spark jobs can use either a managed identity
|Spark pool|Supported identities|Default identity|
| - | -- | - |
-|Serverless Spark compute|User identity and managed identity|User identity|
-|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
+|Serverless Spark compute|User identity, user-assigned managed identity attached to the workspace|User identity|
+|Attached Synapse Spark pool|User identity, user-assigned managed identity attached to the attached Synapse Spark pool, system-assigned managed identity of the attached Synapse Spark pool|System-assigned managed identity of the attached Synapse Spark pool|
If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning serverless Spark compute relies on a user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
If the CLI or SDK code defines an option to use managed identity, Azure Machine
- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
- [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md)
- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
-- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning Concept Endpoints Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-batch.md
description: Learn how Azure Machine Learning uses batch endpoints to simplify m
-+ - devplatv2 - ignite-2023 Previously updated : 04/01/2023 Last updated : 04/04/2024 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it. # Batch endpoints
-After you train a machine learning model, you need to deploy it so that others can consume its predictions. Such execution mode of a model is called *inference*. Azure Machine Learning uses the concept of [endpoints and deployments](concept-endpoints.md) for machine learning models inference.
+Azure Machine Learning allows you to implement *batch endpoints and deployments* to perform long-running, asynchronous inferencing with machine learning models and pipelines. When you train a machine learning model or pipeline, you need to deploy it so that others can use it with new input data to generate predictions. This process of generating predictions with the model or pipeline is called _inferencing_.
-**Batch endpoints** are endpoints that are used to do batch inferencing on large volumes of data over in asynchronous way. Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
-
-We recommend using them when:
+Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis. Use batch endpoints when:
> [!div class="checklist"]
-> * You have expensive models or pipelines that requires a longer time to run.
+> * You have expensive models or pipelines that require a longer time to run.
> * You want to operationalize machine learning pipelines and reuse components.
> * You need to perform inference over large amounts of data, distributed in multiple files.
> * You don't have low latency requirements.
We recommend using them when:
## Batch deployments
-A deployment is a set of resources and computes required to implement the functionality the endpoint provides. Each endpoint can host multiple deployments with different configurations, which helps *decouple the interface* indicated by the endpoint, from *the implementation details* indicated by the deployment. Batch endpoints automatically route the client to the default deployment which can be configured and changed at any time.
+A deployment is a set of resources and computes required to implement the functionality that the endpoint provides. Each endpoint can host several deployments with different configurations, and this functionality helps to *decouple the endpoint's interface* from *the implementation details* that are defined by the deployment. When a batch endpoint is invoked, it automatically routes the client to its default deployment. This default deployment can be configured and changed at any time.
-There are two types of deployments in batch endpoints:
+Two types of deployments are possible in Azure Machine Learning batch endpoints:
-* [Model deployments](#model-deployments)
+* [Model deployment](#model-deployment)
* [Pipeline component deployment](#pipeline-component-deployment)
-### Model deployments
+### Model deployment
-Model deployment allows operationalizing model inference at scale, processing big amounts of data in a low latency and asynchronous way. Scalability is automatically instrumented by Azure Machine Learning by providing parallelization of the inferencing processes across multiple nodes in a compute cluster.
+Model deployment enables the operationalization of model inferencing at scale, allowing you to process large amounts of data in a low latency and asynchronous way. Azure Machine Learning automatically instruments scalability by providing parallelization of the inferencing processes across multiple nodes in a compute cluster.
-Use __Model deployments__ when:
+Use __Model deployment__ when:
> [!div class="checklist"]
-> * You have expensive models that requires a longer time to run inference.
+> * You have expensive models that require a longer time to run inference.
> * You need to perform inference over large amounts of data, distributed in multiple files.
> * You don't have low latency requirements.
> * You can take advantage of parallelization.
-The main benefit of this kind of deployments is that you can use the very same assets deployed in the online world (Online Endpoints) but now to run at scale in batch. If your model requires simple pre or pos processing, you can [author an scoring script](how-to-batch-scoring-script.md) that performs the data transformations required.
+The main benefit of model deployments is that you can use the same assets that are deployed for real-time inferencing to online endpoints, but now, you get to run them at scale in batch. If your model requires simple preprocessing or post-processing, you can [author a scoring script](how-to-batch-scoring-script.md) that performs the data transformations required.
To create a model deployment in a batch endpoint, you need to specify the following elements:
To create a model deployment in a batch endpoint, you need to specify the follow
### Pipeline component deployment
-Pipeline component deployment allows operationalizing entire processing graphs (pipelines) to perform batch inference in a low latency and asynchronous way.
+Pipeline component deployment enables the operationalization of entire processing graphs (or pipelines) to perform batch inference in a low latency and asynchronous way.
-Use __Pipeline component deployments__ when:
+Use __Pipeline component deployment__ when:
> [!div class="checklist"]
-> * You need to operationalize complete compute graphs that can be decomposed in multiple steps.
+> * You need to operationalize complete compute graphs that can be decomposed into multiple steps.
> * You need to reuse components from training pipelines in your inference pipeline.
> * You don't have low latency requirements.
-The main benefit of this kind of deployments is reusability of components already existing in your platform and the capability to operationalize complex inference routines.
+The main benefit of pipeline component deployments is the reusability of components that already exist in your platform and the capability to operationalize complex inference routines.
To create a pipeline component deployment in a batch endpoint, you need to specify the following elements:
To create a pipeline component deployment in a batch endpoint, you need to speci
> [!div class="nextstepaction"] > [Create your first pipeline component deployment](how-to-use-batch-pipeline-deployments.md)
-Batch endpoints also allow you to [create Pipeline component deployments from an existing pipeline job](how-to-use-batch-pipeline-from-job.md). When doing that, Azure Machine Learning automatically creates a Pipeline component out of the job. This simplifies the use of these kinds of deployments. However, it is a best practice to always [create pipeline components explicitly to streamline your MLOps practice](how-to-use-batch-pipeline-deployments.md).
+Batch endpoints also allow you to [create pipeline component deployments from an existing pipeline job](how-to-use-batch-pipeline-from-job.md). When doing that, Azure Machine Learning automatically creates a pipeline component out of the job. This simplifies the use of these kinds of deployments. However, it's a best practice to always [create pipeline components explicitly to streamline your MLOps practice](how-to-use-batch-pipeline-deployments.md).
## Cost management
-Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts, and automatically de-allocated as the job completes. So you only pay for compute when you use it.
+Invoking a batch endpoint triggers an asynchronous batch inference job. Azure Machine Learning automatically provisions compute resources when the job starts, and automatically deallocates them as the job completes. This way, you only pay for compute when you use it.
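+As a concrete illustration of this flow, a single CLI call is typically all it takes to start a batch scoring job against the endpoint's default deployment; compute is provisioned for the job and released when it finishes. The endpoint and data asset names below are placeholders, and the workspace defaults are assumed to be already configured.
+
+```bash
+# Start an asynchronous batch scoring job (endpoint and input names are placeholders)
+az ml batch-endpoint invoke --name my-batch-endpoint --input azureml:my-input-data@latest
+```
+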
> [!TIP]
-> When deploying models, you can [override compute resource settings](how-to-use-batch-endpoint.md#overwrite-deployment-configuration-per-each-job) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job to speed up execution and reduce cost if you know that you can take advantage of specific configurations.
+> When deploying models, you can [override compute resource settings](how-to-use-batch-endpoint.md#overwrite-deployment-configuration-per-each-job) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job. By taking advantage of these specific configurations, you might be able to speed up execution and reduce cost.
-Batch endpoints also can run on low-priority VMs. Batch endpoints can automatically recover from deallocated VMs and resume the work from where it was left when deploying models for inference. See [Use low-priority VMs in batch endpoints](how-to-use-low-priority-batch.md).
+Batch endpoints can also run on low-priority VMs. Batch endpoints can automatically recover from deallocated VMs and resume the work from where it was left when deploying models for inference. For more information on how to use low priority VMs to reduce the cost of batch inference workloads, see [Use low-priority VMs in batch endpoints](how-to-use-low-priority-batch.md).
-Finally, Azure Machine Learning doesn't charge for batch endpoints or batch deployments themselves, so you can organize your endpoints and deployments as best suits your scenario. Endpoints and deployment can use independent or shared clusters, so you can achieve fine grained control over which compute the produced jobs consume. Use __scale-to-zero__ in clusters to ensure no resources are consumed when they are idle.
+Finally, Azure Machine Learning doesn't charge you for batch endpoints or batch deployments themselves, so you can organize your endpoints and deployments as best suits your scenario. Endpoints and deployments can use independent or shared clusters, so you can achieve fine-grained control over which compute the jobs consume. Use __scale-to-zero__ in clusters to ensure no resources are consumed when they're idle.
## Streamline the MLOps practice
You can add, remove, and update deployments without affecting the endpoint itsel
## Flexible data sources and storage
-Batch endpoints reads and write data directly from storage. You can indicate Azure Machine Learning datastores, Azure Machine Learning data asset, or Storage Accounts as inputs. For more information on supported input options and how to indicate them, see [Create jobs and input data to batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+Batch endpoints read and write data directly from storage. You can specify Azure Machine Learning datastores, Azure Machine Learning data assets, or Storage Accounts as inputs. For more information on the supported input options and how to specify them, see [Create jobs and input data to batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
## Security
-Batch endpoints provide all the capabilities required to operate production level workloads in an enterprise setting. They support [private networking](how-to-secure-batch-endpoint.md) on secured workspaces and [Microsoft Entra authentication](how-to-authenticate-batch-endpoint.md), either using a user principal (like a user account) or a service principal (like a managed or unmanaged identity). Jobs generated by a batch endpoint run under the identity of the invoker which gives you flexibility to implement any scenario. See [How to authenticate to batch endpoints](how-to-authenticate-batch-endpoint.md) for details.
+Batch endpoints provide all the capabilities required to operate production level workloads in an enterprise setting. They support [private networking](how-to-secure-batch-endpoint.md) on secured workspaces and [Microsoft Entra authentication](how-to-authenticate-batch-endpoint.md), either using a user principal (like a user account) or a service principal (like a managed or unmanaged identity). Jobs generated by a batch endpoint run under the identity of the invoker, which gives you the flexibility to implement any scenario. For more information on authorization while using batch endpoints, see [How to authenticate on batch endpoints](how-to-authenticate-batch-endpoint.md).
> [!div class="nextstepaction"] > [Configure network isolation in Batch Endpoints](how-to-secure-batch-endpoint.md)
-## Next steps
+## Related content
- [Deploy models with batch endpoints](how-to-use-batch-model-deployments.md)
- [Deploy pipelines with batch endpoints](how-to-use-batch-pipeline-deployments.md)
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
Title: Plan to manage costs
-description: Plan and manage costs for Azure Machine Learning with cost analysis in Azure portal. Learn further cost-saving tips to lower your cost when building ML models.
+description: Plan to manage costs for Azure Machine Learning with cost analysis in the Azure portal. Learn further cost-saving tips for building ML models.
Previously updated : 03/11/2024 Last updated : 03/26/2024 # Plan to manage costs for Azure Machine Learning
-This article describes how to plan and manage costs for Azure Machine Learning. First, you use the Azure pricing calculator to help plan for costs before you add any resources. Next, as you add the Azure resources, review the estimated costs.
+This article describes how to plan and manage costs for Azure Machine Learning. First, use the Azure pricing calculator to help plan for costs before you add any resources. Next, review the estimated costs while you add Azure resources.
-After you've started using Azure Machine Learning resources, use the cost management features to set budgets and monitor costs. Also review the forecasted costs and identify spending trends to identify areas where you might want to act.
+After you start using Azure Machine Learning resources, use the cost management features to set budgets and monitor costs. Also, review the forecasted costs and identify spending trends to identify areas where you might want to act.
-Understand that the costs for Azure Machine Learning are only a portion of the monthly costs in your Azure bill. If you're using other Azure services, you're billed for all the Azure services and resources used in your Azure subscription, including the third-party services. This article explains how to plan for and manage costs for Azure Machine Learning. After you're familiar with managing costs for Azure Machine Learning, apply similar methods to manage costs for all the Azure services used in your subscription.
+Understand that the costs for Azure Machine Learning are only a portion of the monthly costs in your Azure bill. If you use other Azure services, you're billed for all the Azure services and resources used in your Azure subscription, including third-party services. This article explains how to plan for and manage costs for Azure Machine Learning. After you're familiar with managing costs for Azure Machine Learning, apply similar methods to manage costs for all the Azure services used in your subscription.
-For more information on optimizing costs, see [how to manage and optimize cost in Azure Machine Learning](how-to-manage-optimize-cost.md).
+For more information on optimizing costs, see [Manage and optimize Azure Machine Learning costs](how-to-manage-optimize-cost.md).
## Prerequisites
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Cost analysis in Microsoft Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+To view cost data, you need at least *read* access for an Azure account. For information about assigning access to Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Estimate costs before using Azure Machine Learning
-
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you create the resources in an Azure Machine Learning workspace.
-On the left, select **AI + Machine Learning**, then select **Azure Machine Learning** to begin.
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you create resources in an Azure Machine Learning workspace. On the left side of the pricing calculator, select **AI + Machine Learning**, then select **Azure Machine Learning** to begin.
-The following screenshot shows the cost estimation by using the calculator:
+The following screenshot shows an example cost estimate in the pricing calculator:
-As you add new resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
+As you add resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
For more information, see [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ## Understand the full billing model for Azure Machine Learning
-Azure Machine Learning runs on Azure infrastructure that accrues costs along with Azure Machine Learning when you deploy the new resource. It's important to understand that additional infrastructure might accrue cost. You need to manage that cost when you make changes to deployed resources.
-
+Azure Machine Learning runs on Azure infrastructure that accrues costs along with Azure Machine Learning when you deploy the new resource. It's important to understand that extra infrastructure might accrue cost. You need to manage that cost when you make changes to deployed resources.
### Costs that typically accrue with Azure Machine Learning

When you create resources for an Azure Machine Learning workspace, resources for other Azure services are also created. They are:
-* [Azure Container Registry](https://azure.microsoft.com/pricing/details/container-registry?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) Basic account
-* [Azure Block Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) (general purpose v1)
-* [Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-* [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+* [Azure Container Registry](https://azure.microsoft.com/pricing/details/container-registry?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) basic account
+* [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) (general purpose v1)
+* [Azure Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+* [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it's available for your work.
-* [Enable idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period.
-* Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to save cost when you aren't planning to use it.
+When you create a [compute instance](concept-compute-instance.md), the virtual machine (VM) stays on so it's available for your work.
+* Enable [idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown) to reduce costs when the VM is idle for a specified time period.
+* Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to reduce costs when you aren't planning to use it.
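+For example, a compute instance definition created with the CLI (v2) can carry the idle shutdown setting mentioned above directly. The following minimal sketch uses placeholder names, and the `idle_time_before_shutdown_minutes` property is taken from the compute instance YAML schema; verify it against the current schema reference before use.
+
+```yaml
+# instance.yml - minimal compute instance definition with idle shutdown (illustrative values)
+name: my-compute-instance
+type: computeinstance
+size: Standard_DS3_v2
+idle_time_before_shutdown_minutes: 30
+```
+
+You would then create the instance with `az ml compute create --file instance.yml`.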
-
### Costs might accrue before resource deletion
-Before you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following sub resources are common costs that accumulate even when you aren't actively working in the workspace. If you're planning on returning to your Azure Machine Learning workspace at a later time, these resources may continue to accrue costs.
+Before you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following sub resources are common costs that accumulate even when you aren't actively working in the workspace. If you plan on returning to your Azure Machine Learning workspace at a later time, these resources might continue to accrue costs.
* VMs
* Load Balancer
* Azure Virtual Network
* Bandwidth
-Each VM is billed per hour it's running. Cost depends on VM specifications. VMs that are running but not actively working on a dataset will still be charged via the load balancer. For each compute instance, one load balancer is billed per day. Every 50 nodes of a compute cluster have one standard load balancer billed. Each load balancer is billed around $0.33/day. To avoid load balancer costs on stopped compute instances and compute clusters, delete the compute resource.
+Each VM is billed per hour that it runs. Cost depends on VM specifications. VMs that run but don't actively work on a dataset are still charged via the load balancer. For each compute instance, one load balancer is billed per day. Every 50 nodes of a compute cluster have one standard load balancer billed. Each load balancer is billed around $0.33/day. To avoid load balancer costs on stopped compute instances and compute clusters, delete the compute resource.
-Compute instances also incur P10 disk costs even in stopped state. This is because any user content saved there's persisted across the stopped state similar to Azure VMs. We're working on making the OS disk size/ type configurable to better control costs. For Azure Virtual Networks, one virtual network is billed per subscription and per region. Virtual networks can't span regions or subscriptions. Setting up private endpoints in a virtual network may also incur charges. If your virtual network uses an Azure Firewall, this may also incur charges. Bandwidth is charged by usage; the more data transferred, the more you're charged.
+Compute instances also incur P10 disk costs even in stopped state because any user content saved there persists across the stopped state similar to Azure VMs. We're working on making the OS disk size/ type configurable to better control costs. For Azure Virtual Networks, one virtual network is billed per subscription and per region. Virtual networks can't span regions or subscriptions. Setting up private endpoints in a virtual network might also incur charges. If your virtual network uses an Azure Firewall, this might also incur charges. Bandwidth charges reflect usage; the more data transferred, the greater the charge.
> [!TIP]
-> Using an Azure Machine Learning managed virtual network is free. However some features of the managed network rely on Azure Private Link (for private endpoints) and Azure Firewall (for FQDN rules) and will incur charges. For more information, see [Managed virtual network isolation](how-to-managed-network.md#pricing).
+> Using an Azure Machine Learning managed virtual network is free. However, some features of the managed network rely on Azure Private Link (for private endpoints) and Azure Firewall (for FQDN rules), which incur charges. For more information, see [Managed virtual network isolation](how-to-managed-network.md#pricing).
### Costs might accrue after resource deletion

After you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them.

* Azure Container Registry
-* Azure Block Blob Storage
+* Azure Blob Storage
* Key Vault
* Application Insights
from azure.ai.ml.entities import Workspace
ml_client.workspaces.begin_delete(name=ws.name, delete_dependent_resources=True)
```
-If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach any compute resources to your workspace you must delete them separately in the [Azure portal](https://portal.azure.com).
+If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach any compute resources to your workspace, you must delete them separately in the [Azure portal](https://portal.azure.com).
-### Using Azure Prepayment credit with Azure Machine Learning
+### Use Azure Prepayment credit with Azure Machine Learning
-You can pay for Azure Machine Learning charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure Machine Learning charges by using your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for third-party products and services, including those from the Azure Marketplace.
## Review estimated costs in the Azure portal
For example, you might start with the following (modify for your service):
As you create compute resources for Azure Machine Learning, you see estimated costs.
-To create a *compute instance *and view the estimated price:
+To create a compute instance and view the estimated price:
-1. Sign into the [Azure Machine Learning studio](https://ml.azure.com)
+1. Sign into the [Azure Machine Learning studio](https://ml.azure.com).
1. On the left side, select **Compute**. 1. On the top toolbar, select **+New**.
-1. Review the estimated price shown in for each available virtual machine size.
+1. Review the estimated price shown for each available virtual machine size.
1. Finish creating the resource. - If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ## Monitor costs
-As you use Azure resources with Azure Machine Learning, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Azure Machine Learning use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+You incur costs to use Azure resources with Azure Machine Learning. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Azure Machine Learning use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
When you use cost analysis, you view Azure Machine Learning costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you create budgets, you can also easily see where they're exceeded.
To view Azure Machine Learning costs in cost analysis:
1. Sign in to the Azure portal.
2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
-3. By default, cost for services are shown in the first donut chart. Select the area in the chart labeled Azure Machine Learning.
+3. By default, costs for services are shown in the first donut chart. Select the area in the chart labeled Azure Machine Learning.
-Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
-
+Actual monthly costs are shown when you initially open cost analysis. Here's an example that shows all monthly usage costs.
To narrow costs for a single service, like Azure Machine Learning, select **Add filter** and then select **Service name**. Then, select **virtual machines**.
-Here's an example showing costs for just Azure Machine Learning.
+Here's an example that shows costs for just Azure Machine Learning.
<!-- Note to Azure service writer: The image shows an example for Azure Storage. Replace the example image with one that shows costs for your service. -->

In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Azure Machine Learning costs by resource group are also shown. From here, you can explore costs on your own.

## Create budgets

You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you extra money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do more data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
## Other ways to manage and reduce costs for Azure Machine Learning

Use the following tips to help you manage and optimize your compute resource costs; a short Python sketch that applies several of them follows the list.

-- Configure your training clusters for autoscaling
-- Set quotas on your subscription and workspaces
-- Set termination policies on your training job
-- Use low-priority virtual machines (VM)
-- Schedule compute instances to shut down and start up automatically
-- Use an Azure Reserved VM Instance
-- Train locally
-- Parallelize training
-- Set data retention and deletion policies
-- Deploy resources to the same region
+- Configure your training clusters for autoscaling.
+- Set quotas on your subscription and workspaces.
+- Set termination policies on your training job.
+- Use low-priority virtual machines.
+- Schedule compute instances to shut down and start up automatically.
+- Use an Azure Reserved VM instance.
+- Train locally.
+- Parallelize training.
+- Set data retention and deletion policies.
+- Deploy resources to the same region.
- Delete instances and clusters if you don't plan on using them soon.
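
For instance, the following sketch uses the `azure-ai-ml` Python SDK (with placeholder subscription, resource group, workspace, and cluster names) to create a training cluster that applies several of these tips: autoscaling down to zero nodes, a capped maximum size, a short scale-down idle time, and low-priority VMs.

```python
# Sketch only: a cost-conscious training cluster; names and sizes are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<SUBSCRIPTION_ID>", "<RESOURCE_GROUP>", "<WORKSPACE_NAME>"
)

cluster = AmlCompute(
    name="cpu-cluster-lowpri",
    size="STANDARD_DS3_V2",
    min_instances=0,                  # scale to zero when idle
    max_instances=4,                  # cap scale-out to cap spend
    idle_time_before_scale_down=120,  # seconds before idle nodes are released
    tier="low_priority",              # cheaper VMs that can be preempted
)
ml_client.compute.begin_create_or_update(cluster).result()
```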
-For more information, see [manage and optimize costs in Azure Machine Learning](how-to-manage-optimize-cost.md).
+For more information, see [Manage and optimize Azure Machine Learning costs](how-to-manage-optimize-cost.md).
## Next steps

-- [Manage and optimize costs in Azure Machine Learning](how-to-manage-optimize-cost.md).
+- [Manage and optimize Azure Machine Learning costs](how-to-manage-optimize-cost.md)
- [Manage budgets, costs, and quota for Azure Machine Learning at organizational scale](/azure/cloud-adoption-framework/ready/azure-best-practices/optimize-ai-machine-learning-cost)
-- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Learn [how to optimize your cloud investment with Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Quickstart: Start using Cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Identify anomalies and unexpected changes in cost](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Previously updated : 11/04/2022- Last updated : 04/08/2024+
+reviewer: msakande
Prebuilt Docker container images for inference are used when deploying a model w
## Why should I use prebuilt images?
-* Reduces model deployment latency.
-* Improves model deployment success rate.
-* Avoid unnecessary image build during model deployment.
-* Only have required dependencies and access right in the image/container. 
+* Reduces model deployment latency
+* Improves model deployment success rate
+* Avoids unnecessary image build during model deployment
+* Includes only the required dependencies and access rights in the image/container
## List of prebuilt Docker images for inference

> [!IMPORTANT]
-> The list provided below includes only **currently supported** inference docker images by Azure Machine Learning.
+> The list provided in the following table includes only the inference Docker images that Azure Machine Learning **currently supports**.
-* All the docker images run as non-root user.
-* We recommend using `latest` tag for docker images. Prebuilt docker images for inference are published to Microsoft container registry (MCR), to query list of tags available, follow [instructions on the GitHub repository](https://github.com/microsoft/ContainerRegistry#browsing-mcr-content).
-* If you want to use a specific tag for any inference docker image, we support from `latest` to the tag that is *6 months* old from the `latest`.
+* All the Docker images run as a non-root user.
+* We recommend using the `latest` tag for Docker images. Prebuilt Docker images for inference are published to the Microsoft Container Registry (MCR). For information on how to query the list of tags available, see the [MCR GitHub repository](https://github.com/microsoft/ContainerRegistry#browsing-mcr-content).
+* If you want to use a specific tag for any inference Docker image, Azure Machine Learning supports tags that range from `latest` to *six months* older than `latest`.
**Inference minimal base images**
NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cuda11.6.2-g
NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cpu-inference:latest`
NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cuda11.8-gpu-inference:latest`
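
To give a rough idea of how these images are consumed, the following sketch uses the `azure-ai-ml` Python SDK to define an environment based on one of the prebuilt CPU images; the environment name and conda file path are placeholders.

```python
# Sketch: an Azure Machine Learning environment built on a prebuilt inference image.
from azure.ai.ml.entities import Environment

env = Environment(
    name="prebuilt-cpu-inference-env",
    image="mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cpu-inference:latest",
    conda_file="env/conda.yaml",  # extra Python packages that your scoring script needs
)
```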
-## How to use inference prebuilt docker images?
-[Check examples in the Azure machine learning GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container)
-
-## Next steps
+## Related content
+* [GitHub examples of how to use inference prebuilt Docker images](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container)
* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
-* [Learn more about custom containers](how-to-deploy-custom-container.md)
-* [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online)
+* [Use a custom container to deploy a model to an online endpoint](how-to-deploy-custom-container.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 04/14/2023 Last updated : 04/08/2024 ms.devlang: azurecli monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
The following table summarizes the inputs and outputs for batch deployments:
| Deployment type | Number of inputs | Supported input types | Number of outputs | Supported output types |
|--|--|--|--|--|
-| [Model deployment](concept-endpoints-batch.md#model-deployments) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-outputs) |
+| [Model deployment](concept-endpoints-batch.md#model-deployment) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-outputs) |
| [Pipeline component deployment](concept-endpoints-batch.md#pipeline-component-deployment) | [0..N] | [Data inputs](#data-inputs) and [literal inputs](#literal-inputs) | [0..N] | [Data outputs](#data-outputs) |

> [!TIP]
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Previously updated : 04/25/2023 Last updated : 04/08/2024
Create a workspace configuration file in one of the following methods:
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]

```python
- #import required libraries
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
-
- #Enter details of your Azure Machine Learning workspace
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace = '<AZUREML_WORKSPACE_NAME>'
-
- #connect to the workspace
- ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ #import required libraries
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ #Enter details of your Azure Machine Learning workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace = '<AZUREML_WORKSPACE_NAME>'
+
+ #connect to the workspace
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
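+
+ # Alternatively, if you saved a workspace configuration file (config.json), you can
+ # load it instead of hard-coding the details. A sketch, assuming azure-ai-ml's
+ # MLClient.from_config classmethod and a config file in the working directory:
+ # ml_client = MLClient.from_config(credential=DefaultAzureCredential())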
```

## Local computer or remote VM environment
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
The following table describes the key attributes of a deployment:
| Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |
| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [virtual machine quota allocation for deployments](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment). |
-> [!NOTE]
+> [!WARNING]
> - The model and container image (as defined in Environment) can be referenced again at any time by the deployment when the instances behind the deployment go through security patches and/or other recovery operations. If you used a registered model or container image in Azure Container Registry for deployment and removed the model or the container image, the deployments relying on these assets can fail when reimaging happens. If you removed the model or the container image, ensure the dependent deployments are re-created or updated with an alternative model or container image.
> - The container registry that the environment refers to can be private only if the endpoint identity has the permission to access it via Microsoft Entra authentication and Azure RBAC. For the same reason, private Docker registries other than Azure Container Registry are not supported.
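
As a rough illustration of the attributes in the preceding table, the following sketch defines a managed online deployment with the `azure-ai-ml` Python SDK; the endpoint, model, and environment references are placeholders rather than values from this article.

```python
# Sketch: a managed online deployment that sets the instance type and count.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<SUBSCRIPTION_ID>", "<RESOURCE_GROUP>", "<WORKSPACE_NAME>"
)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="<EXISTING_ENDPOINT_NAME>",
    model="azureml:my-model:1",              # reference to a registered model
    environment="azureml:my-environment:1",  # reference to a registered environment
    instance_type="Standard_DS3_v2",
    instance_count=3,  # at least 3 instances are recommended for high availability
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```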
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Use the following steps to enable access to data stored in Azure Blob and File s
For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
+1. Grant **your Azure user identity** the **Storage Blob Data Reader** role for the Azure storage account. The studio uses your identity to access data in blob storage, even if the workspace managed identity has the Reader role.
+
+ For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
1. **Grant the workspace managed identity the Reader role for storage private endpoints**. If your storage service uses a private endpoint, grant the workspace's managed identity *Reader* access to the private endpoint. The workspace's managed identity in Microsoft Entra ID has the same name as your Azure Machine Learning workspace. A private endpoint is necessary for both blob and file storage types.

> [!TIP]
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 04/14/2023 Last updated : 04/08/2024 - tracking-python - references_regions
ml_client.begin_create_or_update(entity=compute)
1. Select the **Compute** page from the left navigation bar.
1. Select **+ New** from the navigation bar of compute instance or compute cluster.
1. Configure the VM size and configuration you need, then select **Next**.
-1. From the **Advanced Settings**, Select **Enable virtual network**, your virtual network and subnet, and finally select the **No Public IP** option under the VNet/subnet section.
+1. From **Security**, select **Enable virtual network**, your virtual network and subnet, and finally select the **No Public IP** option under the VNet/subnet section.
:::image type="content" source="./media/how-to-secure-training-vnet/no-public-ip.png" alt-text="A screenshot of how to configure no public IP for compute instance and compute cluster." lightbox="./media/how-to-secure-training-vnet/no-public-ip.png":::
ml_client.begin_create_or_update(entity=compute)
1. Select the **Compute** page from the left navigation bar.
1. Select **+ New** from the navigation bar of compute instance or compute cluster.
1. Configure the VM size and configuration you need, then select **Next**.
-1. From the **Advanced Settings**, Select **Enable virtual network** and then select your virtual network and subnet.
+1. From **Security**, select **Enable virtual network** and then select your virtual network and subnet.
:::image type="content" source="./media/how-to-secure-training-vnet/with-public-ip.png" alt-text="A screenshot of how to configure a compute instance/cluster in a VNet with a public IP." lightbox="./media/how-to-secure-training-vnet/with-public-ip.png":::
Allow Azure Machine Learning to communicate with the SSH port on the VM or clust
1. In the __Source service tag__ drop-down list, select __AzureMachineLearning__.
- ![Inbound rules for doing experimentation on a VM or HDInsight cluster within a virtual network](./media/how-to-enable-virtual-network/experimentation-virtual-network-inbound.png)
+ :::image type="content" source="./media/how-to-secure-training-vnet/experimentation-virtual-network-inbound.png" alt-text="A screenshot of inbound rules for doing experimentation on a VM or HDInsight cluster within a virtual network." lightbox="./media/how-to-secure-training-vnet/experimentation-virtual-network-inbound.png":::
1. In the __Source port ranges__ drop-down list, select __*__.
machine-learning Concept Llmops Maturity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-llmops-maturity.md
Large Language Model Operations, or **LLMOps**, describes the operational practi
Use the descriptions below to find your *LLMOps Maturity Model* ranking level. These levels provide a general understanding and practical application level of your organization. The guidelines provide you with helpful links to expand your LLMOps knowledge base.
+Or use this [LLMOps Maturity Model Assessment](/assessments/e14e1e9f-d339-4d7e-b2bb-24f056cf08b6/) to determine your organization's current LLMOps maturity level. The questionnaire is designed to help you understand your organization's current capabilities and identify areas for improvement.
+
+Your results from the assessment correspond to an *LLMOps Maturity Model* ranking level, providing a general understanding and practical application level of your organization. These guidelines provide you with helpful links to expand your LLMOps knowledge base.
+ ## <a name="level1"></a>Level 1 - initial
+> [!TIP]
+> Score from [LLMOps Maturity Model Assessment](/assessments/e14e1e9f-d339-4d7e-b2bb-24f056cf08b6/): initial (0-9).
+ **Description:** Your organization is at the initial foundational stage of LLMOps maturity. You're exploring the capabilities of LLMs but haven't yet developed structured practices or systematic approaches. Begin by familiarizing yourself with different LLM APIs and their capabilities. Next, start experimenting with structured prompt design and basic prompt engineering. Review ***Microsoft Learning*** articles as a starting point. Taking what you've learned, discover how to introduce basic metrics for LLM application performance evaluation.
To better understand LLMOps, consider available MS Learning courses and workshop
## <a name="level2"></a> Level 2 - defined
+> [!TIP]
+> Score from [LLMOps Maturity Model Assessment](/assessments/e14e1e9f-d339-4d7e-b2bb-24f056cf08b6/): maturing (9-14).
+ **Description:** Your organization has started to systematize LLM operations, with a focus on structured development and experimentation. However, there's room for more sophisticated integration and optimization. To improve your capabilities and skills, learn how to develop more complex prompts and begin integrating them effectively into applications. During this journey, you'll want to implement a systematic approach for LLM application deployment, possibly exploring CI/CD integration. Once you understand the core, you can begin employing more advanced evaluation metrics like groundedness, relevance, and similarity. Ultimately, you'll want to focus on content safety and ethical considerations in LLM usage.
To improve your capabilities and skills, learn how to develop more complex promp
## <a name="level3"></a> Level 3 - managed
+> [!TIP]
+> Score from [LLMOps Maturity Model Assessment](/assessments/e14e1e9f-d339-4d7e-b2bb-24f056cf08b6/): maturing (15-19).
+ **Description:** Your organization is managing advanced LLM workflows with proactive monitoring and structured deployment strategies. You're close to achieving operational excellence. To expand your base knowledge, focus on continuous improvement and innovation in your LLM applications. As you progress, you can enhance your monitoring strategies with predictive analytics and comprehensive content safety measures. Learn to optimize and fine-tune your LLM applications for specific requirements. Ultimately, you want to strengthen your asset management strategies through advanced version control and rollback capabilities.
To expand your base knowledge, focus on continuous improvement and innovation in
## <a name="level4"></a> Level 4 - optimized
+> [!TIP]
+> Score from [LLMOps Maturity Model Assessment](/assessments/e14e1e9f-d339-4d7e-b2bb-24f056cf08b6/): optimized (19-28).
+ **Description:** Your organization demonstrates operational excellence in LLMOps. You have a sophisticated approach to LLM application development, deployment, and monitoring. As LLMs evolve, you'll want to maintain your cutting-edge position by staying updated with the latest LLM advancements. Continuously evaluate the alignment of your LLM strategies with evolving business objectives. Ensure that you foster a culture of innovation and continuous learning within your team. Last, but not least, share your knowledge and best practices with the wider community to establish thought leadership in the field.
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
Automatic is the default option for a runtime. You can start an automatic runtim
- If you choose serverless compute, you can set the following settings:
    - Customize the VM size that the runtime uses.
    - Customize the idle time, which saves cost by deleting the runtime automatically if it isn't in use.
- - Set the user-assigned managed identity. The automatic runtime uses this identity to pull a base image and install packages. Make sure that the user-assigned managed identity has Azure Container Registry `acrpull` permission. If you don't set this identity, we use the user identity by default. [Learn more about how to create and update user-assigned identities for a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+ - Set the user-assigned managed identity. The automatic runtime uses this identity to pull a base image, authenticate with connections, and install packages. Make sure that the user-assigned managed identity has Azure Container Registry `acrpull` permission. If you don't set this identity, we use the user identity by default.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow with advanced settings using serverless compute for starting an automatic runtime on a flow page." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow with advanced settings using serverless compute for starting an automatic runtime on a flow page." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
+
+ - You can use the following CLI command to assign a user-assigned identity (UAI) to the workspace. [Learn more about how to create and update user-assigned identities for a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
++
+ ```azurecli
+ az ml workspace update -f workspace_update_with_multiple_UAIs.yml --subscription <subscription ID> --resource-group <resource group name> --name <workspace name>
+ ```
+
+ Where the contents of *workspace_update_with_multiple_UAIs.yml* are as follows:
+
+ ```yaml
+ identity:
+ type: system_assigned, user_assigned
+ user_assigned_identities:
+ '/subscriptions/<subscription_id>/resourcegroups/<resource_group_name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<uai_name>': {}
+ '<UAI resource ID 2>': {}
+ primary_user_assigned_identity: <one of the UAI resource IDs in the above list>
+ ```
> [!TIP]
> The following [Azure RBAC role assignments](../../role-based-access-control/role-assignments.md) are required on your user-assigned managed identity for your Azure Machine Learning workspace to access data on the workspace-associated resources.
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
identity:
- resource_id: user_identity_ARM_id_place_holder
```
-Besides, you also need to specify the `Clicn ID` of the user-assigned identity under `environment_variables` the `deployment.yaml` as following. You can find the `Clicn ID` in the `Overview` of the managed identity in Azure portal.
+In addition, you need to specify the `Client ID` of the user-assigned identity under `environment_variables` in the `deployment.yaml` file, as follows. You can find the `Client ID` on the **Overview** page of the managed identity in the Azure portal.
```yaml
environment_variables:
- AZURE_CLIENT_ID: <cliend_id_of_your_user_assigned_identity>
+ AZURE_CLIENT_ID: <client_id_of_your_user_assigned_identity>
```

> [!IMPORTANT]
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `description` | string | Description of the deployment. | | |
| `tags` | object | Dictionary of tags for the deployment. | | |
| `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | |
-| `type` | string | **Required.** Type of the bath deployment. Use `model` for [model deployments](concept-endpoints-batch.md#model-deployments) and `pipeline` for [pipeline component deployments](concept-endpoints-batch.md#pipeline-component-deployment). <br><br>**New in version 1.7**. | `model`, `pipeline` | `model` |
+| `type` | string | **Required.** Type of the batch deployment. Use `model` for [model deployments](concept-endpoints-batch.md#model-deployment) and `pipeline` for [pipeline component deployments](concept-endpoints-batch.md#pipeline-component-deployment). <br><br>**New in version 1.7**. | `model`, `pipeline` | `model` |
| `settings` | object | Configuration of the deployment. See specific YAML reference for model and pipeline component for allowed values. <br><br>**New in version 1.7**. | | |

> [!TIP]
machine-learning Tutorial Feature Store Domain Specific Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-feature-store-domain-specific-language.md
+
+ Title: "Tutorial 7: Develop a feature set using Domain Specific Language (preview)"
+
+description: This is part 7 of the managed feature store tutorial series.
+++++++ Last updated : 03/29/2024++
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial 7: Develop a feature set using Domain Specific Language (preview)
++
+An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and proceeds to the inference steps that look up feature data. For more information about feature stores, visit [feature store concepts](./concept-what-is-managed-feature-store.md).
+
+This tutorial describes how to develop a feature set using Domain Specific Language. The Domain Specific Language (DSL) for the managed feature store provides a simple and user-friendly way to define the most commonly used feature aggregations. With the feature store SDK, users can perform the most commonly used aggregations with a DSL *expression*. Aggregations that use the DSL *expression* ensure consistent results, compared with user-defined functions (UDFs). Additionally, those aggregations avoid the overhead of writing UDFs.
+
+This tutorial shows how to:
+
+> [!div class="checklist"]
+> * Create a new, minimal feature store workspace
+> * Locally develop and test a feature, through use of Domain Specific Language (DSL)
+> * Develop a feature set through use of User Defined Functions (UDFs) that perform the same transformations as a feature set created with DSL
+> * Compare the results of the feature sets created with DSL, and feature sets created with UDFs
+> * Register a feature store entity with the feature store
+> * Register the feature set created using DSL with the feature store
+> * Generate sample training data using the created features
+
+## Prerequisites
+
+> [!NOTE]
+> This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
+
+Before you proceed with this tutorial, make sure that you cover these prerequisites:
+
+1. An Azure Machine Learning workspace. If you don't have one, visit [Quickstart: Create workspace resources](./quickstart-create-resources.md?view-azureml-api-2) to learn how to create one.
+1. To perform the steps in this tutorial, your user account needs either the **Owner** or **Contributor** role to the resource group where the feature store will be created.
+
+## Set up
+
+ This tutorial relies on the Python feature store core SDK (`azureml-featurestore`). This SDK is used for create, read, update, and delete (CRUD) operations, on feature stores, feature sets, and feature store entities.
+
+ You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yml` file covers them.
+
+ To prepare the notebook environment for development:
+
+ 1. Clone the [examples repository - (azureml-examples)](https://github.com/azure/azureml-examples) to your local machine with this command:
+
+ `git clone --depth 1 https://github.com/Azure/azureml-examples`
+
+ You can also download a zip file from the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local machine.
+
+ 1. Upload the feature store samples directory to project workspace
+ 1. Open Azure Machine Learning studio UI of your Azure Machine Learning workspace
+ 1. Select **Notebooks** in left navigation panel
+ 1. Select your user name in the directory listing
+ 1. Select the ellipses (**...**), and then select **Upload folder**
+ 1. Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`
+
+ 1. Run the tutorial
+
+ * Option 1: Create a new notebook, and execute the instructions in this document, step by step
+ * Option 2: Open existing notebook `featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb`. You can keep this document open, and refer to it for more explanation and documentation links
+
+ 1. To configure the notebook environment, you must upload the `conda.yml` file
+
+ 1. Select **Notebooks** on the left navigation panel, and then select the **Files** tab
+ 1. Navigate to the `env` directory (select **Users** > *your_user_name* > **featurestore_sample** > **project** > **env**), and then select the `conda.yml` file
+ 1. Select **Download**
+ 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for the status bar in the top to display the **Configure session** link
+ 1. Select **Configure session** in the top status bar
+ 1. Select **Settings**
+ 1. Select **Apache Spark version** as `Spark version 3.3`
+ 1. Optionally, increase the **Session timeout** (idle time) if you want to avoid frequent restarts of the serverless Spark session
+ 1. Under **Configuration settings**, define *Property* `spark.jars.packages` and *Value* `com.microsoft.azure:azureml-fs-scala-impl:1.0.4`
+ :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" lightbox="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" alt-text="This screenshot shows the Spark session property for a package that contains the jar file used by managed feature store domain-specific language.":::
+ 1. Select **Python packages**
+ 1. Select **Upload conda file**
+ 1. Select the `conda.yml` you downloaded on your local device
+ 1. Select **Apply**
+
+ > [!TIP]
+ > Except for this specific step, you must run all the other steps every time you start a new spark session, or after session time out.
+
+ 1. This code cell sets up the root directory for the samples and starts the Spark session. It needs about 10 minutes to install all the dependencies and start the Spark session:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=setup-root-dir)]
+
+## Provision the necessary resources
+
+ 1. Create a minimal feature store:
+
+ Create a feature store in a region of your choice, from the Azure Machine Learning studio UI or with Azure Machine Learning Python SDK code.
+
+ * Option 1: Create feature store from the Azure Machine Learning studio UI
+
+ 1. Navigate to the feature store UI [landing page](https://ml.azure.com/featureStores)
+ 1. Select **+ Create**
+ 1. The **Basics** tab appears
+ 1. Choose a **Name** for your feature store
+ 1. Select the **Subscription**
+ 1. Select the **Resource group**
+ 1. Select the **Region**
+ 1. Select **Apache Spark version** 3.3, and then select **Next**
+ 1. The **Materialization** tab appears
+ 1. Toggle **Enable materialization**
+ 1. Select **Subscription** and **User identity** to **Assign user managed identity**
+ 1. Select **From Azure subscription** under **Offline store**
+ 1. Select **Store name** and **Azure Data Lake Gen2 file system name**, then select **Next**
+ 1. On the **Review** tab, verify the displayed information and then select **Create**
+
+ * Option 2: Create a feature store using the Python SDK
+ Provide `featurestore_name`, `featurestore_resource_group_name`, and `featurestore_subscription_id` values, and execute this cell to create a minimal feature store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-min-fs)]
+
+ 1. Assign permissions to your user identity on the offline store:
+
+ If feature data is materialized, then you must assign the **Storage Blob Data Reader** role to your user identity to read feature data from offline materialization store.
+ 1. Open the [Azure ML global landing page](https://ml.azure.com/home)
+ 1. Select **Feature stores** in the left navigation
+ 1. You'll see the list of feature stores that you have access to. Select the feature store that you created above
+ 1. Select the storage account link under **Account name** on the **Offline materialization store** card, to navigate to the ADLS Gen2 storage account for the offline store
+ :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" lightbox="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" alt-text="This screenshot shows the storage account link for the offline materialization store on the feature store UI.":::
+ 1. Visit [this resource](../role-based-access-control/role-assignments-portal.md) for more information about how to assign the **Storage Blob Data Reader** role to your user identity on the ADLS Gen2 storage account for offline store. Allow some time for permissions to propagate.
+
+## Available DSL expressions and benchmarks
+
+ Currently, these aggregation expressions are supported:
+ - Average - `avg`
+ - Sum - `sum`
+ - Count - `count`
+ - Min - `min`
+ - Max - `max`
+
+ This table provides benchmarks that compare the performance of aggregations that use a DSL *expression* with the performance of aggregations that use a UDF, using a representative dataset of size 23.5 GB with the following attributes:
+ - `numberOfSourceRows`: 348,244,374
+ - `numberOfOfflineMaterializedRows`: 227,361,061
+
+ |Function|*Expression*|UDF execution time|DSL execution time|
+ |--||||
+ |`get_offline_features(use_materialized_store=false)`|`sum`, `avg`, `count`|~2 hours|< 5 minutes|
+ |`get_offline_features(use_materialized_store=true)`|`sum`, `avg`, `count`|~1.5 hours|< 5 minutes|
+ |`materialize()`|`sum`, `avg`, `count`|~1 hour|< 15 minutes|
+
+ > [!NOTE]
+ > The `min` and `max` DSL expressions provide no performance improvement over UDFs. We recommend that you use UDFs for `min` and `max` transformations.
+
+## Create a feature set specification using DSL expressions
+
+ 1. Execute this code cell to create a feature set specification, using DSL expressions and parquet files as source data.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-dsl-parq-fset)]
+
+ 1. This code cell defines the start and end times for the feature window.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=define-feat-win)]
+
+ 1. This code cell uses `to_spark_dataframe()` to get a dataframe in the defined feature window from the above feature set specification defined using DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-dsl-parq)]
+
+ 1. Print some sample feature values from the feature set defined with DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-parq)]
+
+## Create a feature set specification using UDF
+
+ 1. Create a feature set specification that uses UDF to perform the same transformations:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-udf-parq-fset)]
+
+ This transformation code shows that the UDF defines the same transformations as the DSL expressions:
+
+ ```python
+ # Imports assumed by this transformer (PySpark); the sample defines it in a module file
+ from pyspark.ml import Transformer
+ from pyspark.sql import DataFrame
+ from pyspark.sql import functions as F
+ from pyspark.sql.window import Window
+
+ class TransactionFeatureTransformer(Transformer):
+ def _transform(self, df: DataFrame) -> DataFrame:
+ days = lambda i: i * 86400
+ w_3d = (
+ Window.partitionBy("accountID")
+ .orderBy(F.col("timestamp").cast("long"))
+ .rangeBetween(-days(3), 0)
+ )
+ w_7d = (
+ Window.partitionBy("accountID")
+ .orderBy(F.col("timestamp").cast("long"))
+ .rangeBetween(-days(7), 0)
+ )
+ res = (
+ df.withColumn("transaction_7d_count", F.count("transactionID").over(w_7d))
+ .withColumn(
+ "transaction_amount_7d_sum", F.sum("transactionAmount").over(w_7d)
+ )
+ .withColumn(
+ "transaction_amount_7d_avg", F.avg("transactionAmount").over(w_7d)
+ )
+ .withColumn("transaction_3d_count", F.count("transactionID").over(w_3d))
+ .withColumn(
+ "transaction_amount_3d_sum", F.sum("transactionAmount").over(w_3d)
+ )
+ .withColumn(
+ "transaction_amount_3d_avg", F.avg("transactionAmount").over(w_3d)
+ )
+ .select(
+ "accountID",
+ "timestamp",
+ "transaction_3d_count",
+ "transaction_amount_3d_sum",
+ "transaction_amount_3d_avg",
+ "transaction_7d_count",
+ "transaction_amount_7d_sum",
+ "transaction_amount_7d_avg",
+ )
+ )
+ return res
+
+ ```
+
+ 1. Use `to_spark_dataframe()` to get a dataframe from the above feature set specification, defined using UDF:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-udf-parq)]
+
+ 1. Compare the results and verify consistency between the results from the DSL expressions and the transformations performed with UDF. To verify, select one of the `accountID` values to compare the values in the two dataframes:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-acct)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-udf-acct)]
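+
+ A rough equivalent of that comparison in plain PySpark might look like the following sketch; `df_dsl`, `df_udf`, and the sample account ID are placeholder names rather than values from the notebook:
+
+ ```python
+ from pyspark.sql import functions as F
+
+ sample_account = "<an accountID that exists in your data>"  # hypothetical value
+ df_dsl.filter(F.col("accountID") == sample_account).orderBy("timestamp").show()
+ df_udf.filter(F.col("accountID") == sample_account).orderBy("timestamp").show()
+ ```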
+
+## Export feature set specifications as YAML
+
+ To register the feature set specification with the feature store, it must be saved in a specific format. To review the generated `transactions-dsl` feature set specification, open this file from the file tree, to see the specification: `featurestore/featuresets/transactions-dsl/spec/FeaturesetSpec.yaml`
+
+ The feature set specification contains these elements:
+
+ 1. `source`: Reference to a storage resource; in this case, a parquet file in a blob storage
+ 1. `features`: List of features and their datatypes. If you provide transformation code, the code must return a dataframe that maps to the features and data types
+ 1. `index_columns`: The join keys required to access values from the feature set
+
+ For more information, read the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set specification YAML reference](./reference-yaml-featureset-spec.md) resources.
+
+ As an extra benefit of persisting the feature set specification, it can be source controlled.
+
+ 1. Execute this code cell to write YAML specification file for the feature set, using parquet data source and DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-dsl-parq-fset-spec)]
+
+ 1. Execute this code cell to write a YAML specification file for the feature set, using UDF:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-udf-parq-fset-spec)]
+
+## Initialize SDK clients
+
+ The following steps of this tutorial use two SDKs.
+
+ 1. Feature store CRUD SDK: The Azure Machine Learning (AzureML) SDK `MLClient` (package name `azure-ai-ml`), similar to the one used with Azure Machine Learning workspace. This SDK facilitates feature store CRUD operations
+
+ - Create
+ - Read
+ - Update
+ - Delete
+
+ for feature store and feature set entities, because feature store is implemented as a type of Azure Machine Learning workspace
+
+ 1. Feature store core SDK: This SDK (`azureml-featurestore`) facilitates feature set development and consumption:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=init-python-clients)]
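+
+ As a rough sketch (not the notebook's exact cell), initializing these two clients typically looks like the following; the credential type and the variable names are assumptions carried over from the earlier setup steps:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.identity import AzureMLOnBehalfOfCredential
+ from azureml.featurestore import FeatureStoreClient
+
+ # CRUD client: the feature store is a kind of Azure Machine Learning workspace
+ fs_crud_client = MLClient(
+     AzureMLOnBehalfOfCredential(),
+     subscription_id=featurestore_subscription_id,
+     resource_group_name=featurestore_resource_group_name,
+     workspace_name=featurestore_name,
+ )
+
+ # Core SDK client: used to develop and consume feature sets
+ featurestore = FeatureStoreClient(
+     credential=AzureMLOnBehalfOfCredential(),
+     subscription_id=featurestore_subscription_id,
+     resource_group_name=featurestore_resource_group_name,
+     name=featurestore_name,
+ )
+ ```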
+
+## Register `account` entity with the feature store
+
+ Create an account entity that has a join key `accountID` of `string` type:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-account-entity)]
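+
+ A minimal sketch of what that cell does with the `azure-ai-ml` entities (the entity name and version are placeholders) is:
+
+ ```python
+ from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity
+
+ account_entity_config = FeatureStoreEntity(
+     name="account",
+     version="1",
+     index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
+     description="This entity represents the accountID index key.",
+ )
+
+ # fs_crud_client is the MLClient that targets the feature store workspace
+ poller = fs_crud_client.feature_store_entities.begin_create_or_update(account_entity_config)
+ print(poller.result())
+ ```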
+
+## Register the feature set with the feature store
+
+ 1. Register the `transactions-dsl` feature set (that uses DSL) with the feature store, with offline materialization enabled, using the exported feature set specification:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-dsl-trans-fset)]
+
+ 1. Materialize the feature set to persist the transformed feature data to the offline store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=mater-dsl-trans-fset)]
+
+ 1. Execute this code cell to track the progress of the materialization job:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=track-mater-job)]
+
+ 1. Print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method used to retrieve the training/inference data also uses the materialization store by default:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=lookup-trans-dsl-fset)]
+
+## Generate a training dataframe using the registered feature set
+
+### Load observation data
+
+ Observation data is typically the core data used in training and inference steps. Then, the observation data is joined with the feature data, to create a complete training data resource. Observation data is the data captured during the time of the event. In this case, it has core transaction data including transaction ID, account ID, and transaction amount. Since this data is used for training, it also has the target variable appended (`is_fraud`).
+
+ 1. First, explore the observation data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=load-obs-data)]
+
+ 1. Select features that would be part of the training data, and use the feature store SDK to generate the training data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl)]
+
+ 1. The `get_offline_features()` function appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl)]
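+
+ A simplified sketch of those two steps with the feature store core SDK (the feature set name, version, feature names, and dataframe variable are assumptions, not the notebook's exact values) might look like:
+
+ ```python
+ from azureml.featurestore import get_offline_features
+
+ # Look up the registered feature set and choose the features to join
+ transactions_fset = featurestore.feature_sets.get("transactions-dsl", "1")
+ features = [
+     transactions_fset.get_feature("transaction_amount_7d_sum"),
+     transactions_fset.get_feature("transaction_amount_7d_avg"),
+ ]
+
+ # Point-in-time join of the selected features onto the observation data
+ training_df = get_offline_features(
+     features=features,
+     observation_data=observation_data_df,
+     timestamp_column="timestamp",
+ )
+ training_df.show()
+ ```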
+
+### Generate a training dataframe from feature sets using DSL and UDF
+
+ 1. Register the `transactions-udf` feature set (that uses UDF) with the feature store, using the exported feature set specification. Enable offline materialization for this feature set while registering with the feature store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-udf-trans-fset)]
+
+ 1. Select features from the feature sets (created using DSL and UDF) that you would like to become part of the training data, and use the feature store SDK to generate the training data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl-udf)]
+
+ 1. The function `get_offline_features()` appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl-udf)]
+
+The features are appended to the training data with a point-in-time join. The generated training data can be used for subsequent training and batch inferencing steps.
+
+## Clean up
+
+The [fifth tutorial in the series](./tutorial-develop-feature-set-with-custom-source.md#clean-up) describes how to delete the resources.
+
+## Next steps
+
+* [Part 2: Experiment and train models using features](./tutorial-experiment-train-models-using-features.md)
+* [Part 3: Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md)
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-overview.md
Title: Migrate to Azure Machine Learning from ML Studio (classic)
-description: Learn how to migrate from ML Studio (classic) to Azure Machine Learning for a modernized data science platform.
+ Title: Migrate to Azure Machine Learning from Studio (classic)
+description: Learn how to migrate from Machine Learning Studio (classic) to Azure Machine Learning for a modernized data science platform.
Previously updated : 03/11/2024 Last updated : 04/02/2024
-# Migrate to Azure Machine Learning from ML Studio (classic)
+# Migrate to Azure Machine Learning from Studio (classic)
> [!IMPORTANT]
-> Support for Machine Learning Studio (classic) will end on 31 August 2024. We recommend that you transition to [Azure Machine Learning](../overview-what-is-azure-machine-learning.md) by that date.
+> Support for Machine Learning Studio (classic) ends on 31 August 2024. We recommend that you transition to [Azure Machine Learning](../overview-what-is-azure-machine-learning.md) by that date.
>
-> After December 2021, you can no longer create new Machine Learning Studio (classic) resources. Through 31 August 2024, you can continue to use existing Machine Learning Studio (classic) resources.
+> After December 2021, you can no longer create new Studio (classic) resources. Through 31 August 2024, you can continue to use existing Studio (classic) resources.
>
-> ML Studio (classic) documentation is being retired and might not be updated in the future.
+> Studio (classic) documentation is being retired and might not be updated in the future.
Learn how to migrate from Machine Learning Studio (classic) to Azure Machine Learning. Azure Machine Learning provides a modernized data science platform that combines no-code and code-first approaches.
-This guide walks through a basic *lift and shift* migration. If you want to optimize an existing machine learning workflow, or modernize a machine learning platform, see the [Azure Machine Learning adoption framework](https://aka.ms/mlstudio-classic-migration-repo) for more resources, including digital survey tools, worksheets, and planning templates.
+This guide walks through a basic *lift and shift* migration. If you want to optimize an existing machine learning workflow, or modernize a machine learning platform, see the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for more resources, including digital survey tools, worksheets, and planning templates.
-Please work with your cloud solution architect on the migration.
+Please work with your cloud solution architect on the migration.
## Recommended approach
To migrate to Azure Machine Learning, we recommend the following approach:
> * Step 5: Clean up Studio (classic) assets
> * Step 6: Review and expand scenarios
-## Step 1: Assess Azure Machine Learning
+### Step 1: Assess Azure Machine Learning
1. Learn about [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) and its benefits, costs, and architecture.
-1. Compare the capabilities of Azure Machine Learning and ML Studio (classic).
-
- >[!NOTE]
- > The **designer** feature in Azure Machine Learning provides a similar drag-and-drop experience to ML Studio (classic). However, Azure Machine Learning also provides robust [code-first workflows](../concept-model-management-and-deployment.md) as an alternative. This migration series focuses on the designer, since it's most similar to the Studio (classic) experience.
+1. Compare the capabilities of Azure Machine Learning and Studio (classic).
- The following table summarizes the key differences between ML Studio (classic) and Azure Machine Learning.
+ The following table summarizes the key differences.
- | Feature | ML Studio (classic) | Azure Machine Learning |
+ | Feature | Studio (classic) | Azure Machine Learning |
|| | |
| Drag-and-drop interface | Classic experience | Updated experience: [Azure Machine Learning designer](../concept-designer.md)|
- | Code SDKs | Not supported | Fully integrated with [Azure Machine Learning Python](/python/api/overview/azure/ml/) and [R](https://github.com/Azure/azureml-sdk-for-r) SDKs |
+ | Code SDKs | Not supported | Fully integrated with Azure Machine Learning [Python](/python/api/overview/azure/ml/) and [R](https://github.com/Azure/azureml-sdk-for-r) SDKs |
| Experiment | Scalable (10-GB training data limit) | Scale with compute target |
- | Training compute targets | Proprietary compute target, CPU support only | Wide range of customizable [training compute targets](../concept-compute-target.md#training-compute-targets). Includes GPU and CPU support |
- | Deployment compute targets | Proprietary web service format, not customizable | Wide range of customizable [deployment compute targets](../concept-compute-target.md#compute-targets-for-inference). Includes GPU and CPU support |
- | ML pipeline | Not supported | Build flexible, modular [pipelines](../concept-ml-pipelines.md) to automate workflows |
+ | Training compute targets | Proprietary compute target, CPU support only | Wide range of customizable [training compute targets](../concept-compute-target.md#training-compute-targets); includes GPU and CPU support |
+ | Deployment compute targets | Proprietary web service format, not customizable | Wide range of customizable [deployment compute targets](../concept-compute-target.md#compute-targets-for-inference); includes GPU and CPU support |
+ | Machine learning pipeline | Not supported | Build flexible, modular [pipelines](../concept-ml-pipelines.md) to automate workflows |
| MLOps | Basic model management and deployment; CPU-only deployments | Entity versioning (model, data, workflows), workflow automation, integration with CICD tooling, CPU and GPU deployments, [and more](../concept-model-management-and-deployment.md) |
| Model format | Proprietary format, Studio (classic) only | Multiple supported formats depending on training job type |
- | Automated model training and hyperparameter tuning | Not supported | [Supported](../concept-automated-ml.md). Code-first and no-code options. |
+ | Automated model training and hyperparameter tuning | Not supported | [Supported](../concept-automated-ml.md)<br><br> Code-first and no-code options |
| Data drift detection | Not supported | [Supported](../v1/how-to-monitor-datasets.md) |
| Data labeling projects | Not supported | [Supported](../how-to-create-image-labeling-projects.md) |
| Role-based access control (RBAC) | Only contributor and owner role | [Flexible role definition and RBAC control](../how-to-assign-roles.md) |
- | AI Gallery | [Supported](https://gallery.azure.ai) | Unsupported <br><br> Learn with [sample Python SDK notebooks](https://github.com/Azure/MachineLearningNotebooks) |
+ | AI Gallery | [Supported](https://gallery.azure.ai) | Not supported <br><br> Learn with [sample Python SDK notebooks](https://github.com/Azure/MachineLearningNotebooks) |
+
+ >[!NOTE]
+ > The **designer** feature in Azure Machine Learning provides a drag-and-drop experience that's similar to Studio (classic). However, Azure Machine Learning also provides robust [code-first workflows](../concept-model-management-and-deployment.md) as an alternative. This migration series focuses on the designer, since it's most similar to the Studio (classic) experience.
-1. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the following [Studio (classic) and designer component-mapping](#studio-classic-and-designer-component-mapping) table.
+1. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the [Studio (classic) and designer component-mapping](#studio-classic-and-designer-component-mapping) table.
1. Create an [Azure Machine Learning workspace](../quickstart-create-resources.md).
-## Step 2: Define a strategy and plan
+### Step 2: Define a strategy and plan
1. Define business justifications and expected outcomes.
Please work with your cloud solution architect to define your strategy.
For planning resources, including a planning doc template, see the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo).
-## Step 3: Rebuild your first model
+### Step 3: Rebuild your first model
After you define a strategy, migrate your first model.
-1. [Migrate datasets to Azure Machine Learning](migrate-register-dataset.md).
+1. [Migrate a dataset to Azure Machine Learning](migrate-register-dataset.md).
-1. Use the Azure Machine Learning designer to [rebuild experiments](migrate-rebuild-experiment.md).
+1. Use the Azure Machine Learning designer to [rebuild an experiment](migrate-rebuild-experiment.md).
-1. Use the Azure Machine Learning designer to [redeploy web services](migrate-rebuild-web-service.md).
+1. Use the Azure Machine Learning designer to [redeploy a web service](migrate-rebuild-web-service.md).
>[!NOTE]
- > This guidance is built on top of Azure Machine Learning v1 concepts and features. Azure Machine Learning has CLI v2 and Python SDK v2. We suggest that you rebuild your ML Studio (classic) models using v2 instead of v1. Start with [Azure Machine Learning v2](../concept-v2.md).
+ > This guidance is built on top of Azure Machine Learning v1 concepts and features. Azure Machine Learning has CLI v2 and Python SDK v2. We suggest that you rebuild your Studio (classic) models using v2 instead of v1. Start with [Azure Machine Learning v2](../concept-v2.md).
-## Step 4: Integrate client apps
+### Step 4: Integrate client apps
-Modify client applications that invoke ML Studio (classic) web services to use your new [Azure Machine Learning endpoints](migrate-rebuild-integrate-with-client-app.md).
+Modify client applications that invoke Studio (classic) web services to use your new [Azure Machine Learning endpoints](migrate-rebuild-integrate-with-client-app.md).
-## Step 5: Clean up Studio (classic) assets
+### Step 5: Clean up Studio (classic) assets
-To avoid extra charges, [clean up Studio (classic) assets](../classic/export-delete-personal-data-dsr.md). You might want to retain assets for fallback until you have validated Azure Machine Learning workloads.
+To avoid extra charges, [clean up Studio (classic) assets](../classic/export-delete-personal-data-dsr.md). You might want to retain assets for fallback until you've validated Azure Machine Learning workloads.
-## Step 6: Review and expand scenarios
+### Step 6: Review and expand scenarios
1. Review the model migration for best practices and validate workloads.
-1. Expand scenarios and migrate additional workloads to Azure Machine Learning.
+1. Expand scenarios and migrate more workloads to Azure Machine Learning.
## Studio (classic) and designer component-mapping
-Consult the following table to see which modules to use while rebuilding ML Studio (classic) experiments in the Azure Machine Learning designer.
+Consult the following table to see which modules to use while rebuilding Studio (classic) experiments in the Azure Machine Learning designer.
> [!IMPORTANT] > The designer implements modules through open-source Python packages rather than C# packages like Studio (classic). Because of this difference, the output of designer components might vary slightly from their Studio (classic) counterparts.
Consult the following table to see which modules to use while rebuilding ML Stud
|--|-|--| |Data input and output|- Enter data manually <br> - Export data <br> - Import data <br> - Load trained model <br> - Unpack zipped datasets|- Enter data manually <br> - Export data <br> - Import data| |Data format conversions|- Convert to CSV <br> - Convert to dataset <br> - Convert to ARFF <br> - Convert to SVMLight <br> - Convert to TSV|- Convert to CSV <br> - Convert to dataset|
-|Data transformation - Manipulation|- Add columns<br> - Add rows <br> - Apply SQL transformation <br> - Clean missing data <br> - Convert to indicator values <br> - Edit metadata <br> - Join data <br> - Remove duplicate rows <br> - Select columns in dataset <br> - Select columns transform <br> - SMOTE <br> - Group categorical values|- Add columns<br> - Add rows <br> - Apply SQL transformation <br> - Clean missing data <br> - Convert to indicator values <br> - Edit metadata <br> - Join data <br> - Remove duplicate rows <br> - Select columns in dataset <br> - Select columns transform <br> - SMOTE|
+|Data transformation ΓÇô Manipulation|- Add columns<br> - Add rows <br> - Apply SQL transformation <br> - Clean missing data <br> - Convert to indicator values <br> - Edit metadata <br> - Join data <br> - Remove duplicate rows <br> - Select columns in dataset <br> - Select columns transform <br> - SMOTE <br> - Group categorical values|- Add columns<br> - Add rows <br> - Apply SQL transformation <br> - Clean missing data <br> - Convert to indicator values <br> - Edit metadata <br> - Join data <br> - Remove duplicate rows <br> - Select columns in dataset <br> - Select columns transform <br> - SMOTE|
|Data transformation ΓÇô Scale and reduce |- Clip values <br> - Group data into bins <br> - Normalize data <br>- Principal component analysis |- Clip values <br> - Group data into bins <br> - Normalize data| |Data transformation ΓÇô Sample and split|- Partition and sample <br> - Split data|- Partition and sample <br> - Split data| |Data transformation ΓÇô Filter |- Apply filter <br> - FIR filter <br> - IIR filter <br> - Median filter <br> - Moving average filter <br> - Threshold filter <br> - User-defined filter| | |Data transformation ΓÇô Learning with counts |- Build counting transform <br> - Export count table <br> - Import count table <br> - Merge count transform<br> - Modify count table parameters| | |Feature selection |- Filter-based feature selection <br> - Fisher linear discriminant analysis <br> - Permutation feature importance |- Filter-based feature selection <br> - Permutation feature importance|
-| Model - Classification| - Multiclass decision forest <br> - Multiclass decision jungle <br> - Multiclass logistic regression <br>- Multiclass neural network <br>- One-vs-all multiclass <br>- Two-class averaged perceptron <br>- Two-class Bayes point machine <br>- Two-class boosted decision tree <br> - Two-class decision forest <br> - Two-class decision jungle <br> - Two-class locally-deep SVM <br> - Two-class logistic regression <br> - Two-class neural network <br> - Two-class support vector machine | - Multiclass decision forest <br> - Multiclass boost decision tree <br> - Multiclass logistic regression <br> - Multiclass neural network <br> - One-vs-all multiclass <br> - Two-class averaged perceptron <br> - Two-class boosted decision tree <br> - Two-class decision forest <br> - Two-class logistic regression <br> - Two-class neural network <br> - Two-class support vector machine |
-| Model - Clustering| - K-means clustering| - K-means clustering|
-| Model - Regression| - Bayesian linear regression <br> - Boosted decision tree regression <br> - Decision forest regression <br> - Fast forest quantile regression <br> - Linear regression <br> - Neural network regression <br> - Ordinal regression <br> - Poisson regression| - Boosted decision tree regression <br> - Decision forest regression <br> - Fast forest quantile regression <br> - Linear regression <br> - Neural network regression <br> - Poisson regression|
+| Model ΓÇô Classification| - Multiclass decision forest <br> - Multiclass decision jungle <br> - Multiclass logistic regression <br>- Multiclass neural network <br>- One-vs-all multiclass <br>- Two-class averaged perceptron <br>- Two-class Bayes point machine <br>- Two-class boosted decision tree <br> - Two-class decision forest <br> - Two-class decision jungle <br> - Two-class locally deep SVM <br> - Two-class logistic regression <br> - Two-class neural network <br> - Two-class support vector machine | - Multiclass decision forest <br> - Multiclass boosted decision tree <br> - Multiclass logistic regression <br> - Multiclass neural network <br> - One-vs-all multiclass <br> - Two-class averaged perceptron <br> - Two-class boosted decision tree <br> - Two-class decision forest <br> - Two-class logistic regression <br> - Two-class neural network <br> - Two-class support vector machine |
+| Model ΓÇô Clustering| - K-means clustering| - K-means clustering|
+| Model ΓÇô Regression| - Bayesian linear regression <br> - Boosted decision tree regression <br> - Decision forest regression <br> - Fast forest quantile regression <br> - Linear regression <br> - Neural network regression <br> - Ordinal regression <br> - Poisson regression| - Boosted decision tree regression <br> - Decision forest regression <br> - Fast forest quantile regression <br> - Linear regression <br> - Neural network regression <br> - Poisson regression|
| Model ΓÇô Anomaly detection| - One-class SVM <br> - PCA-based anomaly detection | - PCA-based anomaly detection| | Machine Learning ΓÇô Evaluate | - Cross-validate model <br> - Evaluate model <br> - Evaluate recommender | - Cross-validate model <br> - Evaluate model <br> - Evaluate recommender| | Machine Learning ΓÇô Train| - Sweep clustering <br> - Train anomaly detection model <br> - Train clustering model <br> - Train matchbox recommender - <br> Train model <br> - Tune model hyperparameters| - Train anomaly detection model <br> - Train clustering model <br> - Train model <br> - Train PyTorch model <br> - Train SVD recommender <br> - Train wide and deep recommender <br> - Tune model hyperparameters|
Consult the following table to see which modules to use while rebuilding ML Stud
| Web service | - Input <br> - Output | - Input <br> - Output| | Computer vision| | - Apply image transformation <br> - Convert to image directory <br> - Init image transformation <br> - Split image directory <br> - DenseNet image classification <br> - ResNet image classification |
-For more information on how to use individual designer components, see the [designer component reference](../component-reference/component-reference.md).
+For more information on how to use individual designer components, see the [Algorithm & component reference](../component-reference/component-reference.md).
### What if a designer component is missing?
If your migration is blocked due to missing modules in the designer, contact us
## Example migration
-The following experiment migration highlights some of the differences between ML Studio (classic) and Azure Machine Learning.
+The following migration example highlights some of the differences between Studio (classic) and Azure Machine Learning.
### Datasets
-In ML Studio (classic), *datasets* were saved in your workspace and could only be used by Studio (classic).
+In Studio (classic), *datasets* were saved in your workspace and could only be used by Studio (classic).
-In Azure Machine Learning, *datasets* are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](concept-data.md).
+In Azure Machine Learning, *datasets* are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Data in Azure Machine Learning](concept-data.md).
### Pipeline
-In ML Studio (classic), *experiments* contained the processing logic for your work. You created experiments with drag-and-drop modules.
+In Studio (classic), *experiments* contained the processing logic for your work. You created experiments with drag-and-drop modules.
In Azure Machine Learning, *pipelines* contain the processing logic for your work. You can create pipelines with either drag-and-drop modules or by writing code. ### Web service endpoints Studio (classic) used *REQUEST/RESPOND API* for real-time prediction and *BATCH EXECUTION API* for batch prediction or retraining. Azure Machine Learning uses *real-time endpoints* (managed endpoints) for real-time prediction and *pipeline endpoints* for batch prediction or retraining. ## Related content
-In this article, you learned the high-level requirements for migrating to Azure Machine Learning. For detailed steps, see the other articles in the ML Studio (classic) migration series:
+In this article, you learned the high-level requirements for migrating to Azure Machine Learning. For detailed steps, see the other articles in the Machine Learning Studio (classic) migration series:
-- [Migrate dataset](migrate-register-dataset.md)-- [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md)
+- [Migrate a Studio (classic) dataset](migrate-register-dataset.md)
+- [Rebuild a Studio (classic) experiment](migrate-rebuild-experiment.md)
- [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md)-- [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+- [Consume pipeline endpoints from client applications](migrate-rebuild-integrate-with-client-app.md).
- [Migrate Execute R Script modules](migrate-execute-r-script.md) For more migration resources, see the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo).
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
ms.
Previously updated : 03/13/2024 Last updated : 04/04/2024 # Azure Migrate appliance: Common questions
By default, the appliance and its installed agents are updated automatically. Th
Only the appliance and the appliance agents are updated by these automatic updates. The operating system is not updated by Azure Migrate automatic updates. Use Windows Updates to keep the operating system up to date.
+## How do I troubleshoot auto-update failures for the Azure Migrate appliance?
+
+A modification was made recently to the MSI validation process, which could potentially impact the Migrate appliance auto-update process. The auto-update process might fail with the following error message:
++
+To fix this issue, follow these steps to ensure that your appliance can validate the digital signatures of the MSIs:
+
+1. Ensure that Microsoft's root certificate authority certificates are present in your appliance's certificate stores.
+ 1. Go to **Settings** and search for 'certificates'.
+ 1. Select **Manage Computer Certificates**.
+
+ :::image type="content" source="./media/common-questions-appliance/settings-inline.png" alt-text="Screenshot of Windows settings." lightbox="./media/common-questions-appliance/settings-expanded.png":::
+
+ 1. In the certificate manager, you should see entries for **Microsoft Root Certificate Authority 2011** and **Microsoft Code Signing PCA 2011**, as shown in the following screenshots:
+
+ :::image type="content" source="./media/common-questions-appliance/certificate-1-inline.png" alt-text="Screenshot of certificate 1." lightbox="./media/common-questions-appliance/certificate-1-expanded.png":::
+
+ :::image type="content" source="./media/common-questions-appliance/certificate-2-inline.png" alt-text="Screenshot of certificate 2." lightbox="./media/common-questions-appliance/certificate-2-expanded.png":::
+
+ 1. If these two certificates are not present, proceed to download them from the following sources:
+ - https://download.microsoft.com/download/2/4/8/248D8A62-FCCD-475C-85E7-6ED59520FC0F/MicrosoftRootCertificateAuthority2011.cer
+ - https://www.microsoft.com/pkiops/certs/MicCodSigPCA2011_2011-07-08.crt
+ 1. Install these certificates on the appliance machine.
+1. Check if there are any group policies on your machine that could be interfering with certificate validation:
+ 1. Go to the Windows Start menu > **Run** > **gpedit.msc**. <br>The **Local Group Policy Editor** window opens. Make sure that the **Network Retrieval** policies are defined as shown in the following screenshot:
+
+ :::image type="content" source="./media/common-questions-appliance/local-group-policy-editor-inline.png" alt-text="Screenshot of local group policy editor." lightbox="./media/common-questions-appliance/local-group-policy-editor-expanded.png":::
+
+1. Ensure that there are no internet access issues or firewall settings interfering with the certificate validation.
+
+**Verify Azure Migrate MSI Validation Readiness**
+
+1. To ensure that your appliance is ready to validate Azure Migrate MSIs, follow these steps:
+ 1. Download a sample MSI from [Microsoft Download Center](https://download.microsoft.com/download/9/b/8/9b8abdb7-a784-4a25-9da7-31ce4d80a0c5/MicrosoftAzureAutoUpdate.msi) on the appliance.
+ 1. Right-click the file and go to the **Digital Signatures** tab.
+
+ :::image type="content" source="./media/common-questions-appliance/digital-sign-inline.png" alt-text="Screenshot of digital signature tab." lightbox="./media/common-questions-appliance/digital-sign-expanded.png":::
+
+ 1. Select **Details** and check that the digital signature information for the certificate is **OK**, as highlighted in the following screenshot:
+
+ :::image type="content" source="./media/common-questions-appliance/digital-sign-inline.png" alt-text="Screenshot of digital signature tab." lightbox="./media/common-questions-appliance/digital-sign-expanded.png":::
+ ## Can I check agent health? Yes. In the portal, go to the **Agent health** page of the Azure Migrate: Discovery and assessment tool or the Migration and modernization tool. There, you can check the connection status between Azure and the discovery and assessment agents on the appliance.
mysql April 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/april-2024.md
All existing engine version server upgrades to 8.0.36 engine version.
To check your engine version, run `SELECT VERSION();` command at the MySQL prompt ## Features-- Support for Azure Defender for Azure DB for MySQL Flexible Server-
+### [Microsoft Defender for Cloud](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-databases-introduction)
+- Introducing Defender for Cloud support to simplify security management with threat protection from anomalous database activities in Azure Database for MySQL flexible server instances.
+
## Improvement - Expose old_alter_table for 8.0.x.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL fl
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## April 2024
+- **Microsoft Defender for Cloud supports Azure Database for MySQL flexible server (General Availability)**
+
+ We're excited to announce the general availability of the Microsoft Defender for Cloud feature for Azure Database for MySQL flexible server in all service tiers. The Microsoft Defender Advanced Threat Protection feature simplifies security management of Azure Database for MySQL flexible server instances. It monitors the server for anomalous or suspicious database activities to detect potential threats and provides security alerts for you to investigate and take appropriate action, allowing you to actively improve the security posture of your database without being a security expert. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-databases-introduction)
+- **Known Issues**
+
+ While attempting to enable the Microsoft Defender for Cloud feature for an Azure Database for MySQL flexible server, you may encounter the following error: 'The server <server_name> is not compatible with Advanced Threat Protection. Please contact Microsoft support to update the server to a supported version.' This issue can occur on MySQL flexible servers that are still awaiting an internal update. It will be automatically resolved in the next internal update of your server. Alternatively, you can open a support ticket to expedite an immediate update.
## March 2024
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
### Traffic routes
-* NAT gateway replaces a subnetΓÇÖs [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) to the internet when configured. When NAT gateway is attached to the subnet, all traffic within the 0.0.0.0/0 prefix routes to NAT gateway before connecting outbound to the internet.
+* The subnet has a [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) that automatically routes traffic destined for 0.0.0.0/0 to the internet. Once a NAT gateway is configured on the subnet, outbound communication from the virtual machines in the subnet to the internet prioritizes the NAT gateway's public IP address.
* You can override NAT gateway as a subnet's system default route to the internet by creating a custom user-defined route (UDR) for 0.0.0.0/0 traffic (a sketch follows later in this section).
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
* Outbound connectivity follows this order of precedence among different routing and outbound connectivity methods:
- * Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet.
+ * UDR with Virtual appliance / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet.
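For example, a custom UDR for 0.0.0.0/0 that points at a network virtual appliance takes precedence over the NAT gateway. The following is a minimal Azure CLI sketch; the resource names and the appliance IP are placeholders, not values from this article.

```azurecli
# Create a route table with a 0.0.0.0/0 route that overrides the NAT gateway,
# then associate it with the subnet (all names and IPs are placeholders).
az network route-table create --resource-group <resource-group> --name <route-table>
az network route-table route create --resource-group <resource-group> \
  --route-table-name <route-table> --name default-to-nva \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
  --next-hop-ip-address <nva-private-ip>
az network vnet subnet update --resource-group <resource-group> \
  --vnet-name <virtual-network> --name <subnet> --route-table <route-table>
```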
### NAT gateway configurations
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
Previously updated : 04/03/2023 Last updated : 04/08/2024 #CustomerIntent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
Network Watcher offers two traffic tools that help you log and visualize network
## Usage + quotas
-The **Usage + quotas** capability of Network Watcher provides a summary of how many of each network resource you've deployed in a subscription and region and what the limit is for the resource. For more information, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits) to the number of network resources that you can create within an Azure subscription and region. This information is helpful when planning future resource deployments as you can't create more resources if you reach their limits within the subscription or region.
+The **Usage + quotas** capability of Network Watcher provides a summary of your deployed network resources within a subscription and region, including current usage and corresponding limits for each resource. For more information, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits) to learn about the limits for each Azure network resource per region per subscription. This information is helpful when planning future resource deployments as you can't create more resources if you reach their limits within the subscription or region.
:::image type="content" source="./media/network-watcher-overview/subscription-limits.png" alt-text="Screenshot showing Networking resources usage and limits per subscription in the Azure portal.":::
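If you prefer the command line, the same usage and limit information can be listed per region. A minimal Azure CLI sketch, with the region as a placeholder:

```azurecli
# List current usage against the subscription limits for network resources in a region.
az network list-usages --location <region> --output table
```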
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
-In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md).
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can learn how to manage a VNet flow log using the [Azure portal](vnet-flow-logs-portal.md) or [PowerShell](vnet-flow-logs-powershell.md).
> [!IMPORTANT] > The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Previously updated : 03/28/2024 Last updated : 04/08/2024 #CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize network performance.
VNet flow logs can be enabled during the preview in the following regions:
## Related content -- To learn how to create, change, enable, disable, or delete VNet flow logs, see [Manage VNet flow logs using Azure PowerShell](vnet-flow-logs-powershell.md) or [Manage VNet flow logs using the Azure CLI](vnet-flow-logs-cli.md).
+- To learn how to create, change, enable, disable, or delete VNet flow logs, see the [Azure portal](vnet-flow-logs-portal.md), [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md) guides.
- To learn about traffic analytics, see [Traffic analytics overview](traffic-analytics.md) and [Schema and data aggregation in Azure Network Watcher traffic analytics](traffic-analytics-schema.md). - To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
network-watcher Vnet Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md
Last updated 04/03/2024
Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
-In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure portal. You can also learn how to manage a VNet flow log using [Azure PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md).
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure portal. You can also learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md).
> [!IMPORTANT] > The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
-In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. You can learn how to manage a VNet flow log using the [Azure CLI](vnet-flow-logs-cli.md).
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. You can learn how to manage a VNet flow log using the [Azure portal](vnet-flow-logs-portal.md) or [Azure CLI](vnet-flow-logs-cli.md).
> [!IMPORTANT] > The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
description: Shows you how to quickly stand up IBM WebSphere Liberty and Open Li
Previously updated : 01/31/2024 Last updated : 04/04/2024
This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift (ARO) using the Azure portal.
-This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions several resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions several resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty).
This article is intended to help you quickly get to deployment. Before going to
## Prerequisites -- A local machine with a Unix-like operating system installed (for example, Ubuntu, Azure Linux, or macOS, Windows Subsystem for Linux).
+- A local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux).
+- The [Azure CLI](/cli/azure/install-azure-cli). If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+- Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli). (A brief sketch of these commands follows this list.)
+- When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+- Run [az version](/cli/azure/reference-index?#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az-upgrade). This article requires at least version 2.31.0 of Azure CLI.
- A Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)). - [Maven](https://maven.apache.org/download.cgi) version 3.5.0 or higher. - [Docker](https://docs.docker.com/get-docker/) for your OS.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.31.0 or higher. - The Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
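The sign-in and version checks listed above translate to a few commands; a minimal sketch, assuming the Azure CLI is already installed:

```bash
az login          # follow the device or browser prompt to authenticate
az version        # confirm the CLI and extension versions
az upgrade        # optionally upgrade if the version is older than 2.31.0
```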
+> [!NOTE]
+> You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker.
+>
+> :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
+ ## Get a Red Hat pull secret The Azure Marketplace offer you're going to use in this article requires a Red Hat pull secret. This section shows you how to get a Red Hat pull secret for Azure Red Hat OpenShift. To learn about what a Red Hat pull secret is and why you need it, see the [Get a Red Hat pull secret](/azure/openshift/tutorial-create-cluster?WT.mc_id=Portal-fx#get-a-red-hat-pull-secret-optional) section of [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](/azure/openshift/tutorial-create-cluster?WT.mc_id=Portal-fx). To get the pull secret for use, follow the steps in this section.
The following content is an example that was copied from the Red Hat console por
Save the secret to a file so you can use it later.
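For example, from a shell you might paste the secret into a file such as *pull-secret.txt*; the file name here is only an example:

```bash
# Save the Red Hat pull secret for later use; paste the JSON between the markers.
cat << 'EOF' > pull-secret.txt
<paste-your-pull-secret-here>
EOF
```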
-<a name='create-an-azure-active-directory-service-principal-from-the-azure-portal'></a>
- ## Create a Microsoft Entra service principal from the Azure portal The Azure Marketplace offer you're going to use in this article requires a Microsoft Entra service principal to deploy your Azure Red Hat OpenShift cluster. The offer assigns the service principal with proper privileges during deployment time, with no role assignment needed. If you have a service principal ready to use, skip this section and move on to the next section, where you deploy the offer.
The steps in this section direct you to deploy IBM WebSphere Liberty or Open Lib
The following steps show you how to find the offer and fill out the **Basics** pane.
-1. In the search bar at the top of the Azure portal, enter *Liberty*. In the auto-suggested search results, in the **Marketplace** section, select **IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift**, as shown in the following screenshot.
+1. In the search bar at the top of the Azure portal, enter *Liberty*. In the auto-suggested search results, in the **Marketplace** section, select **IBM Liberty on ARO**, as shown in the following screenshot.
:::image type="content" source="media/howto-deploy-java-liberty-app/marketplace-search-results.png" alt-text="Screenshot of Azure portal showing IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift in search results." lightbox="media/howto-deploy-java-liberty-app/marketplace-search-results.png":::
The following steps show you how to find the offer and fill out the **Basics** p
1. The offer must be deployed in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *abc1228rg*.
+1. Create an environment variable in your shell for the resource group name.
+
+ ```bash
+ export RESOURCE_GROUP_NAME=<your-resource-group-name>
+ ```
+ 1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where OpenShift operates, see [Regions for Red Hat OpenShift 4.x on Azure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=openshift&regions=all). 1. After selecting the region, select **Next**.
The following steps show you how to fill out the **ARO** pane shown in the follo
1. Under **Provide information to create a new cluster**, for **Red Hat pull secret**, fill in the Red Hat pull secret that you obtained in the [Get a Red Hat pull secret](#get-a-red-hat-pull-secret) section. Use the same value for **Confirm secret**.
-1. Fill in **Service principal client ID** with the service principal Application (client) ID that you obtained in the [Create a Microsoft Entra service principal from the Azure portal](#create-an-azure-active-directory-service-principal-from-the-azure-portal) section.
+1. Fill in **Service principal client ID** with the service principal Application (client) ID that you obtained in the [Create a Microsoft Entra service principal from the Azure portal](#create-a-microsoft-entra-service-principal-from-the-azure-portal) section.
-1. Fill in **Service principal client secret** with the service principal Application secret that you obtained in the [Create a Microsoft Entra service principal from the Azure portal](#create-an-azure-active-directory-service-principal-from-the-azure-portal) section. Use the same value for **Confirm secret**.
+1. Fill in **Service principal client secret** with the service principal Application secret that you obtained in the [Create a Microsoft Entra service principal from the Azure portal](#create-a-microsoft-entra-service-principal-from-the-azure-portal) section. Use the same value for **Confirm secret**.
1. After filling in the values, select **Next**.
The following steps guide you through creating an Azure SQL Database single data
> > :::image type="content" source="media/howto-deploy-java-liberty-app/create-sql-database-networking.png" alt-text="Screenshot of the Azure portal that shows the Networking tab of the Create SQL Database page with the Connectivity method and Firewall rules settings highlighted." lightbox="media/howto-deploy-java-liberty-app/create-sql-database-networking.png":::
+1. Create an environment variable in your shell for the resource group name for the database.
+
+ ```bash
+ export DB_RESOURCE_GROUP_NAME=<db-resource-group>
+ ```
+ Now that you created the database and ARO cluster, you can prepare the ARO to host your WebSphere Liberty application. ## Configure and deploy the sample application
Use the following steps to deploy and test the application:
To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, ARO cluster, Azure SQL Database, and all related resources. ```bash
-az group delete --name abc1228rg --yes --no-wait
-az group delete --name <db-resource-group> --yes --no-wait
+az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
+az group delete --name $DB_RESOURCE_GROUP_NAME --yes --no-wait
``` ## Next steps
operator-5g-core Quickstart Deploy 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/quickstart-deploy-5g-core.md
Title: How to Deploy Azure Operator 5G Core Preview
+ Title: Deploy Azure Operator 5G Core Preview
description: Learn how to deploy Azure Operator 5G core Preview using Bicep Scripts, PowerShell, and Azure CLI. Previously updated : 03/07/2024 Last updated : 04/08/2024 #CustomerIntent: As a < type of user >, I want < what? > so that < why? >. # Quickstart: Deploy Azure Operator 5G Core Preview
-Azure Operator 5G Core Preview is deployed using the Azure Operator 5G Core Resource Provider (RP). Bicep scripts are bundled along with empty parameter files for each Mobile Packet Core resource. These resources are:
+Azure Operator 5G Core Preview is deployed using the Azure Operator 5G Core Resource Provider (RP), which uses Bicep scripts bundled along with empty parameter files for each Mobile Packet Core resource.
+
+> [!NOTE]
+> The clusterServices resource must be created before any of the other resources, which can then follow in any order. However, if you require observability services, create the observabilityServices resource immediately after the clusterServices resource.
- Microsoft.MobilePacketCore/clusterServices - per cluster PaaS services
+- Microsoft.MobilePacketCore/observabilityServices - per cluster observability PaaS services (elastic/elastalert/kargo/kafka/etc)
- Microsoft.MobilePacketCore/amfDeployments - AMF/MME network function - Microsoft.MobilePacketCore/smfDeployments - SMF network function - Microsoft.MobilePacketCore/nrfDeployments - NRF network function - Microsoft.MobilePacketCore/nssfDeployments - NSSF network function - Microsoft.MobilePacketCore/upfDeployments - UPF network function-- Microsoft.MobilePacketCore/observabilityServices - per cluster observability PaaS services (elastic/elastalert/kargo/kafka/etc) ## Prerequisites Before you can successfully deploy Azure Operator 5G Core, you must: -- [Register your resource provider](../azure-resource-manager/management/resource-providers-and-types.md) for the HybridNetwork and MobilePacketCore namespaces.
+- [Register and verify the resource providers](../azure-resource-manager/management/resource-providers-and-types.md) for the HybridNetwork and MobilePacketCore namespaces (a sketch follows this list).
+- Grant the "Mobile Packet Core" service principal Contributor access at the subscription level. This is a temporary requirement until the step is embedded as part of the RP registration.
+- Ensure that the network, subnet, and IP plans are ready for the resource parameter files.
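The provider registration in the first prerequisite can be done with the Azure CLI; a minimal sketch:

```azurecli
# Register the required resource providers, then confirm both report "Registered".
az provider register --namespace Microsoft.HybridNetwork
az provider register --namespace Microsoft.MobilePacketCore
az provider show --namespace Microsoft.HybridNetwork --query registrationState -o tsv
az provider show --namespace Microsoft.MobilePacketCore --query registrationState -o tsv
```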
-Based on your deployment environments, complete one of the following:
+Based on your deployment environments, complete one of the following prerequisites:
- [Prerequisites to deploy Azure Operator 5G Core Preview on Azure Kubernetes Service](quickstart-complete-prerequisites-deploy-azure-kubernetes-service.md). - [Prerequisites to deploy Azure Operator 5G Core Preview on Nexus Azure Kubernetes Service](quickstart-complete-prerequisites-deploy-nexus-azure-kubernetes-service.md) ## Post cluster creation
-After you complete the prerequisite steps and create a cluster, you must enable resources used to deploy Azure Operator 5G Core. The Azure Operator 5G Core resource provider manages the remote cluster through line-of-sight communications via Azure ARC. Azure Operator 5G Core workload is deployed through helm operator services provided by the Network Function Manager (NFM). To enable these services, the cluster must be ARC enabled, the NFM Kubernetes extension must be installed, and an Azure custom location must be created. The following Azure CLI commands describe how to enable these services. Run the commands from any command prompt displayed when you sign in using the `az-login` command.
+After you complete the prerequisite steps and create a cluster, you must enable resources used to deploy Azure Operator 5G Core. The Azure Operator 5G Core resource provider manages the remote cluster through line-of-sight communications via Azure Arc. The Azure Operator 5G Core workload is deployed through Helm operator services provided by the Network Function Manager (NFM). To enable these services, the cluster must be Arc-enabled, the NFM Kubernetes extension must be installed, and an Azure custom location must be created. The following Azure CLI commands describe how to enable these services. Run the commands from any command prompt after you sign in using the `az login` command.
## ARC-enable the cluster
ARC is used to enable communication from the Azure Operator 5G Core resource pro
Use the following Azure CLI command:
-`$ az connectedk8s connect --name <ARC NAME> --resource-group <RESOURCE GROUP> --custom-locations-oid <LOCATION> --kube-config <KUBECONFIG FILE>`
+```azurecli
+az connectedk8s connect --name <ARC NAME> --resource-group <RESOURCE GROUP> --custom-locations-oid <CUSTOM LOCATIONS OBJECT ID> --kube-config <KUBECONFIG FILE>
+```
### ARC-enable the cluster for Nexus Azure Kubernetes Services Retrieve the Nexus AKS connected cluster ID with the following command. You need this cluster ID to create the custom location.
- `$ az connectedk8s show -n <NAKS-CLUSTER-NAME> -g <NAKS-RESOURCE-GRUP> --query id -o tsv`
+```azurecli
+az connectedk8s show -n <NAKS-CLUSTER-NAME> -g <NAKS-RESOURCE-GROUP> --query id -o tsv
+```
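To reuse the ID when you create the custom location, you can capture it in a shell variable; the variable name is only an example:

```azurecli
# Store the connected cluster resource ID for use in the custom location command.
export NAKS_CLUSTER_ID=$(az connectedk8s show -n <NAKS-CLUSTER-NAME> -g <NAKS-RESOURCE-GROUP> --query id -o tsv)
echo "$NAKS_CLUSTER_ID"
```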
+ ## Install the Network Function Manager Kubernetes extension Execute the following Azure CLI command to install the Network Function Manager (NFM) Kubernetes extension:
-`$ az k8s-extension create --name networkfunction-operator --cluster-name <ARC NAME> --resource-group <RESOURCE GROUP> --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --auto-upgrade-minor-version true --scope cluster --release-namespace azurehybridnetwork --release-train preview --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator`
+```azurecli
+az k8s-extension create \
+--name networkfunction-operator \
+--cluster-name <YourArcClusterName> \
+--resource-group <YourResourceGroupName> \
+--cluster-type connectedClusters \
+--extension-type Microsoft.Azure.HybridNetwork \
+--auto-upgrade-minor-version true \
+--scope cluster \
+--release-namespace azurehybridnetwork \
+--release-train preview \
+--config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator
+```
+Replace `YourArcClusterName` with the name of your Azure/Nexus Arc enabled Kubernetes cluster and `YourResourceGroupName` with the name of your resource group.
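To confirm the extension installed successfully, you can check its provisioning state; a verification sketch using the same placeholders:

```azurecli
# The extension should report "Succeeded" once installation completes.
az k8s-extension show \
  --name networkfunction-operator \
  --cluster-name <YourArcClusterName> \
  --resource-group <YourResourceGroupName> \
  --cluster-type connectedClusters \
  --query provisioningState -o tsv
```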
## Create an Azure custom location Enter the following Azure CLI command to create an Azure custom location:
-`$ az customlocation create -g <RESOURCE GROUP> -n <CUSTOM LOCATION NAME> --namespace azurehybridnetwork --host-resource-id /subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.Kubernetes/connectedClusters/<ARC NAME> --cluster-extension-ids /subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.Kubernetes/connectedClusters/<ARC NAME>/providers/Microsoft.KubernetesConfiguration/extensions/networkfunction-operator`
+```azurecli
+az customlocation create \
+  -g <YourResourceGroupName> \
+  -n <YourCustomLocationName> \
+  -l <YourAzureRegion> \
+  --namespace azurehybridnetwork \
+  --host-resource-id /subscriptions/<YourSubscriptionId>/resourceGroups/<YourResourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<YourArcClusterName> \
+  --cluster-extension-ids /subscriptions/<YourSubscriptionId>/resourceGroups/<YourResourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<YourArcClusterName>/providers/Microsoft.KubernetesConfiguration/extensions/networkfunction-operator
+```
+
+Replace `YourResourceGroupName`, `YourCustomLocationName`, `YourAzureRegion`, `YourSubscriptionId`, and `YourArcClusterName` with your actual resource group name, custom location name, Azure region, subscription ID, and Azure Arc enabled Kubernetes cluster name respectively.
-## Populate the parameter files
+> [!NOTE]
+> The `--cluster-extension-ids` option is used to provide the IDs of the cluster extensions that should be associated with the custom location.
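You can confirm the custom location provisioned successfully before moving on; a verification sketch using the placeholders above:

```azurecli
# The custom location should report "Succeeded" once provisioning completes.
az customlocation show -g <YourResourceGroupName> -n <YourCustomLocationName> --query provisioningState -o tsv
```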
-The empty parameter files that were bundled with the Bicep scripts must be populated with values suitable for the cluster being deployed. Open each parameter file and add IP addresses, subnets, and storage account information.
+## Deploy Azure Operator 5G Core via Bicep scripts
-You can also modify the parameterized values yaml file to change tuning parameters such as cpu, memory limits, and requests. You can also add new parameters manually.
+Deployment of Azure Operator 5G Core consists of multiple resources: clusterServices, amfDeployments, smfDeployments, upfDeployments, nrfDeployments, nssfDeployments, and observabilityServices. Each resource is deployed by an individual Bicep script and a corresponding parameters file. Contact your Microsoft account representative to get access to the required Azure Operator 5G Core files.
-The Bicep scripts read these parameter files to produce a JSON object. The object is passed to Azure Resource Manager and used to deploy the Azure Operator 5G Core resource.
+> [!NOTE]
+> The required files are shared as a zip file.
-> [!IMPORTANT]
-> Any new parameters must be added to both the parameters file and the Bicep script file.
+Unpacking the zip file provides a Bicep script and a corresponding parameter file for each Azure Operator 5G Core resource. Note the location of the unpacked files. The next sections describe the parameters you need to set for each resource and how to deploy via Azure CLI commands.
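For example, once a parameter file is populated (as described in the next section), each resource can be deployed as a standard resource group deployment. This is a hedged sketch; the file names are assumptions about the unpacked bundle, not documented names:

```azurecli
# Deploy the clusterServices resource from its Bicep script and parameter file
# (file names are illustrative; use the names from the unpacked zip).
az deployment group create \
  --resource-group <YourResourceGroupName> \
  --template-file clusterServices.bicep \
  --parameters @clusterServices.parameters.json
```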
+
+## Populate the parameter files
+
+Mobile Packet Core resources are deployed via Bicep scripts that take parameters as input. The following tables describe the parameters to be supplied for each resource type.
+
+### Cluster Services parameters
+
+| CLUSTERSERVICES  | Description   | Platform  |
+|--|-|-|
+| `admin-password` | The admin password for all PaaS UIs. This password must be the same across all charts.  | all  |
+| `alert-host` | The alert host IP address  | Azure only  |
+| `alertmgr-lb-ip` | The IP address of the Prometheus Alert manager load balancer  | all  |
+| `customLocationId` | The customer location ID path   | all  |
+|`db-etcd-lb-ip` | The IP address of the ETCD server load balancer IP  | all  |
+| `elastic-password` | The Elasticsearch server admin password  | all  |
+| `elasticsearch-host`  | The Elasticsearch host IP address  | all  |
+| `fluentd-targets-host`  | The Fluentd target host IP address   | all  |
+| `grafana-lb-ip` | The IP address of the Grafana load balancer.  | all  |
+| `grafana-url` | The Grafana UI URL (`https://<IP>:<port>`), where the port number is customer defined | all |
+| `istio-proxy-include-ip-ranges` | The allowed ingress IP ranges for the Istio proxy. Default is `*`. | all |
+| `jaeger-host`  | The Jaeger target host IP address   | all  |
+| `kargo-lb-ip`  | The Kargo load balancer IP address   | all  |
+| `multus-deployed`  | boolean on whether Multus is deployed or not.  | Azure only  |
+| `nfs-filepath`  | The NFS (Network File System) file path where PaaS components store data - Nexus default "/filestore"  | Azure only  |
+| `nfs-server` | The NFS (Network File System) server IP address   | Azure only  |
+| `oam-lb-subnet`  | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+| `redis-cluster-lb-ip`  | The IP address of the Redis cluster load balancer  | Nexus only  |
+| `redis-limit-cpu`  | The max CPU limit for each Redis server POD  | all  |
+| `redis-limit-mem`  | The max memory limit for each Redis POD  | all  |
+| `redis-primaries` | The number of Redis primary shard PODs  | all  |
+| `redis-replicas`  | The number of Redis replica instances for each primary shard  | all  |
+| `redis-request-cpu`  | The Min CPU request for each Redis POD  | all  |
+| `redis-request-mem`  | The min memory request for each Redis POD   | all  |
+| `thanos-lb-ip`  | The IP address of the Thanos load balancer.  | all  |
+| `timer-lb-ip`  | The IP address of the Timer load balancer.  | all  |
+|`tlscrt`  | The Transport Layer Security (TLS) certificate in plain text  used in cert manager  | all  |
+| `tlskey`  | The TLS key in plain text, used in cert manager  | all  |
+|`unique-name-suffix`  | The unique name suffix for all generated PaaS service logs  | all  |
+
+
+### AMF Deployments Parameters 
+
+| AMF Parameters  | Description   | Platform  |
+|--|--|-|
+| `admin-password`  | The password for the admin user.  |    |
+| `aes256cfb128Key` |  The AES-256-CFB-128 encryption key is Customer generated  | all  |
+| `amf-cfgmgr-lb-ip` | The IP address for the AMF Configuration Manager POD.  | all  |
+| `amf-ingress-gw-lb-ip`  | The IP address for the AMF Ingress Gateway load balancer POD IP   | all  |
+| `amf-ingress-gw-li-lb-ip`  | The IP address for the AMF Ingress Gateway Lawful intercept POD IP  | all  |
+| `amf-mme-ppe-lb-ip1 \*`  | The IP address for the AMF/MME external load balancer (for SCTP associations)   | all  |
+| `amf-mme-ppe-lb-ip2` | The IP address for the AMF/MME external load balancer (for SCTP associations)  (second IP).   | all  |
+| `elasticsearch-host` | The Elasticsearch host IP address  | all  |
+| `external-gtpc-svc-ip` | The IP address for the external GTP-C IP service address for N26 interface  | all  |
+| `fluentd-targets-host` | The Fluentd target host IP address  | all  |
+| `gn-lb-subnet` | The subnet name for the GN-interface load balancer.  | Azure only  |
+| `grafana-url` | The Grafana UI URL (`https://<IP>:<port>`), where the port number is customer defined | all |
+| `gtpc_agent-n26-mme` | The IP address for the GTPC agent N26 interface to the cMME. AMF-MME | all |
+| `gtpc_agent-s10` | The IP address for the GTPC agent S10 interface - MME to MME | all |
+| `gtpc_agent-s11-mme` | The IP address for the GTPC agent S11 interface to the cMME. - MME - SGW | all |
+| `gtpc-agent-ext-svc-name`| The external service name for the GTP-C (GPRS Tunneling Protocol Control Plane) agent.  | all  |
+| `gtpc-agent-ext-svc-type`  | The external service type for the GTPC agent.  | all  |
+| `gtpc-agent-lb-ip` | The IP address for the GTPC agent load balancer.  | all  |
+| `jaeger-host`  | The Jaeger target host IP address   | all  |
+| `li-lb-subnet` | The subnet name for the LI load balancer.  | all  |
+|`nfs-filepath` | The Network File System (NFS) file path where PaaS components store data  | Azure only  |
+|`nfs-server` | The NFS server IP address   | Azure only  |
+| `oam-lb-subnet` | The subnet name for the Operations, Administration, and Maintenance (OAM) load balancer.   | Azure only  |
+| `sriov-subnet`  | The name of the SRIOV subnet   | Azure only  |
+| `ulb-endpoint-ips1`  | Not required since we're using lb-ppe in Azure Operator 5G Core. Leave blank   | all  |
| `ulb-endpoint-ips2` | Not required since we're using lb-ppe in Azure Operator 5G Core. Leave blank. | all |
+| `unique-name-suffix`  | The unique name suffix for all generated PaaS service logs  | all  |
+
+### SMF Deployment Parameters
+
+| SMF Parameters  | Description   | Platform  |
+|--|--|-|
+| `aes256cfb128Key` | The AES-256-CFB-128 encryption key. Default value is an empty string.  | all  |
+| `elasticsearch-host` | The Elasticsearch host IP address  | all  |
+| `fluentd-targets-host` | The Fluentd target host IP address  | all  |
+| `gn-lb-subnet` | The subnet name for the GN-interface load balancer.  | Azure only  |
+| `grafana-url` | The Grafana UI URL (`https://<IP address>:<port>`), where the port number is customer defined  | all  |
+| `gtpc-agent-ext-svc-name` | The external service name for the GTPC agent.  | all  |
+| `gtpc-agent-ext-svc-type`  | The external service type for the GTPC agent.  | all  |
+| `gtpc-agent-lb-ip` | The IP address for the GTPC agent load balancer.  | all  |
+| `inband-data-agent-lb-ip` | The IP address for the inband data agent load balancer.   | all  |
+|`jaeger-host`  | The Jaeger target host IP address  | all  |
+| `lcdr-filepath` | The file path for local CDR charging  | all  |
+| `li-lb-subnet`  | The subnet name for the LI load balancer.    | Azure only  |
+| `max-instances-in-smfset` | The maximum number of instances in the SMF set - value is set to 3  | all  |
+| `n4-lb-subnet`  | The subnet name for N4 load balancer service.   | Azure only  |
+| `nfs-filepath` | The NFS (Network File System) file path where PaaS components store data  | Azure only  |
+| `nfs-server` | The NFS (Network File System) server IP address   | Azure only  |
+| `oam-lb-subnet`  | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+| `pfcp-c-loadbalancer-ip` | The IP address for the PFCP-C load balancer.  | all  |
+| `pfcp-ext-svc-name` | The external service name for the PFCP.  | all  |
+| `pfcp-ext-svc-type` | The external service type for the PFCP.  | all  |
+| `pfcp-lb-ip` | The IP address for the PFCP load balancer.  | all  |
+| `pod-lb-ppe-replicas` | The number of replicas for the POD LB PPE.  | all  |
+|`radius-agent-lb-ip` | The IP address for the RADIUS agent IP load balancer.  | all  |
+| `smf-cfgmgr-lb-ip`  | The IP address for the SMF Config manager load balancer.  | all  |
+| `smf-ingress-gw-lb-ip` | The IP address for the SMF Ingress Gateway load balancer.  | all  |
+| `smf-ingress-gw-li-lb-ip`  | The IP address for the SMF Ingress Gateway LI load balancer.  | all  |
+| `smf-instance-id` | The unique instance ID identifying the SMF in the set.  |    |
+|`smfset-unique-set-id` | The unique ID of the SMF set.   | all  |
+| `sriov-subnet` | The name of the SRIOV subnet   | Azure only  |
+| `sshd-cipher-suite`  | The cipher suite for SSH (Secure Shell) connections.  | all  |
+| `tls-cipher-suite` | The TLS cipher suite.  | all  |
+| `unique-name-suffix` | The unique name suffix for all PaaS service logs  | all  |
+
+### UPF Deployment Parameters 
+
+| UPF parameters  | Description   | Platform  |
+|--||-|
+| `admin-password` |  The password for the admin user. Default value is "admin"  |    |
+| `aes256cfb128Key` | The AES-256-CFB-128 encryption key. AES encryption key used by cfgmgr  | all  |
+|`alert-host` | The alert host IP address  | all  |
+| `elasticsearch-host` | The Elasticsearch host IP address  | all  |
+| `fileserver-cephfs-enabled-true-false` | A boolean value indicating whether CephFS is enabled for the file server.  |    |
+| `fileserver-cfg-storage-class-name` | The storage class name for file server storage.  | all  |
+| `fileserver-requests-storage` | The storage size for file server requests.  | all  |
+| `fileserver-web-storage-class-name` | The storage class name for file server web storage.  | all  |
+| `fluentd-targets-host` | The Fluentd target host IP address  | all  |
+| `gn-lb-subnet` | The subnet name for the GN-interface load balancer.  |    |
+| `grafana-url` | The Grafana UI URL (`https://<IP address>:<port>`), where the port number is customer defined  | all  |
+| `jaeger-host` | The Jaeger target host IP address  | all  |
+| `l3am-max-ppe` | The maximum number of Packet processing engines (PPE) that are supported in user plane   | all  |
+|`l3am-spread-factor`  | The spread factor determines the number of PPE instances where sessions of a single PPE are backed up   | all  |
+| `n4-lb-subnet` | The subnet name for N4 load balancer service.   | Azure only  |
+| `nfs-filepath` | The NFS (Network File System) file path where PaaS components store data  | Azure only  |
+| `nfs-server` | The NFS (Network File System) server IP address   | Azure only  |
+| `oam-lb-subnet` | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+| `pfcp-ext-svc-name` | The name of the PFCP (Packet Forwarding Control Protocol) external service.  | Azure only  |
+| `pfcp-u-external-fqdn` | The external fully qualified domain name for the PFCP-U.  | all  |
+| `pfcp-u-lb-ip` | The IP address for the PFCP-U (Packet Forwarding Control Protocol - User Plane) load balancer.  | all  |
+| `ppe-imagemanagement-requests-storage`  | The storage size for PPE (Packet Processing Engine) image management requests.  | all  |
+| `ppe-imagemanagement-storage-class-name` | The storage class name for PPE image management.  | all  |
+|`ppe-node-zone-resiliency-enabled` | A boolean value indicating whether PPE node zone resiliency is enabled.  | all  |
+| `sriov-subnet-1` | The subnet for SR-IOV (Single Root I/O Virtualization) interface 1.  | Azure only  |
+| `sriov-subnet-2` | The subnet for SR-IOV interface 2.  | Azure only  |
+| `sshd-cipher-suite` | The cipher suite for SSH (Secure Shell) connections.  | all  |
+| `tdef-enabled-true-false` | A boolean value indicating whether TDEF (Traffic Detection Function) is enabled. False is default  | Nexus only  |
+|`tdef-sc-name` | TDEF storage class name   | Nexus only  |
+| `tls-cipher-suite` | The cipher suite for TLS (Transport Layer Security) connections.  | all  |
+| `tvs-enabled-true-false` | A boolean value indicating whether TVS (Traffic video shaping) is enabled. Default is false  | Nexus only  |
+| `unique-name-suffix` | The unique name suffix for all PaaS service logs  | all  |
+| `upf-cfgmgr-lb-ip` | The IP address for the UPF configuration manager load balancer.  | all  |
+| `upf-ingress-gw-lb-fqdn` | The fully qualified domain name for the UPF ingress gateway load balancer.  | all  |
+| `upf-ingress-gw-lb-ip` | The IP address for the User Plane Function (UPF) ingress gateway load balancer.  | all  |
+| `upf-ingress-gw-li-fqdn` | The fully qualified domain name for the UPF ingress gateway lawful intercept (LI).  | all  |
+| `upf-ingress-gw-li-ip` | The IP address for the UPF ingress gateway lawful intercept (LI).  | all  |
++
+### NRF Deployment Parameters
+
+| NRF Parameters  | Description   | Platform  |
+|--|--|-|
+| `aes256cfb128Key`  |  The AES-256-CFB-128 encryption key is Customer generated  | All  |
+| `elasticsearch-host` | The Elasticsearch host IP address   | All  |
+| `grafana-url`  | The Grafana UI URL (`https://<IP address>:<port>`), where the port number is customer defined  | All  |
+| `jaeger-host` | The Jaeger target host IP address   | All  |
+| `nfs-filepath`  | The NFS (Network File System) file path where PaaS components store data  | Azure only  |
+| `nfs-server` | The NFS (Network File System) server IP address   | Azure only  |
+| `nrf-cfgmgr-lb-ip` | The IP address for the NRF Configuration Manager POD.  | All  |
+| `nrf-ingress-gw-lb-ip`  | The IP address of the load balancer for the NRF ingress gateway.  | All  |
+| `oam-lb-subnet`  | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+| `unique-name-suffix`  | The unique name suffix for all generated PaaS service logs  | All  |
+
+ 
+### NSSF Deployment Parameters
+
+| NSSF Parameters  | Description   | Platform  |
+||--|-|
+|`aes256cfb128Key`  |  The AES-256-CFB-128 encryption key is Customer generated  | all  |
+| `elasticsearch-host` | The Elasticsearch host IP address  | all  |
+| `fluentd-targets-host` | The Fluentd target host IP address  | all  |
+| `grafana-url` | The Grafana UI URL (`https://<IP address>:<port>`), where the port number is customer defined  | all  |
+| `jaeger-host`  | The Jaeger target host IP address   | all  |
+| `nfs-filepath`  | The NFS (Network File System) file path where PaaS components store data  | Azure only  |
+| `nfs-server` | The NFS (Network File System) server IP address   | Azure only  |
+| `nssf-cfgmgr-lb-ip` | The IP address for the NSSF Configuration Manager POD.  | all  |
+| `nssf-ingress-gw-lb-ip`  | The IP address for the NSSF Ingress Gateway load balancer IP  | all  |
+|`oam-lb-subnet`  | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+|`unique-name-suffix`  | The unique name suffix for all generated PaaS service logs  | all  |
+
+ 
+### Observability Services Parameters 
+
+| OBSERVABILITY parameters  | Description   | Platform  |
+||--|-|
+| `admin-password`  | The admin password for all PaaS UIs. This password must be the same across all charts.  | all  |
+| `elastalert-lb-ip`  | The IP address of the Elastalert load balancer.  | all  |
+| `elastic-lb-ip`  | The IP address of the Elastic load balancer.  | all  |
+| `elasticsearch-host`  | The Elasticsearch server host IP address  | all  |
+| `elasticsearch-server`  | The Elasticsearch UI server IP address  | all  |
+| `fluentd-targets-host`  | The Fluentd target host IP address  | all  |
+| `grafana-url`  | The Grafana UI URL (`https://<IP address>:<port>`), where the port number is customer defined  | all  |
+|`jaeger-lb-ip`  | The IP address of the Jaeger load balancer.  | all  |
+| `kafka-lb-ip`  | The IP address of the Kafka load balancer  | all  |
+| `keycloak-lb-ip`  | The IP address of the Keycloak load balancer  | all  |
+| `kibana-lb-ip` | The IP address of the Kibana load balancer  | all  |
+| `kube-prom-lb-ip` | The IP address of the Kube-prom load balancer  | all  |
+| `nfs-filepath`  | The NFS (Network File System) file path where PaaS components store data  | Azure only  |
+| `nfs-server`  | The NFS (Network File System) server IP address   | Azure only  |
+|`oam-lb-subnet`  | The subnet name for the OAM (Operations, Administration, and Maintenance) load balancer.   | Azure only  |
+| `unique-name-suffix`  | The unique name suffix for all PaaS service logs  | all  |
+
## Deploy Azure Operator 5G Core via Azure Resource Manager
-You can deploy Azure Operator 5G Core resources by using either Azure CLI or PowerShell.
+You can deploy Azure Operator 5G Core resources by using Azure CLI. The following command deploys a single mobile packet core resource. To deploy a complete AO5GC environment, all resources must be deployed.
+
+The example command deploys the nrfDeployments resource. Similar commands deploy the other resource types (such as AMF, SMF, UPF, and NSSF). The observability components are deployed with a separate observability services resource request. In total, seven resources make up a complete Azure Operator 5G Core deployment.
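
After all deployments complete, you can confirm what was created by listing the resources in the resource group. The following is a minimal sketch, not taken from the product documentation: the `$resourceGroupName` variable is defined in the next section, and the JMESPath filter on the `Microsoft.MobilePacketCore` provider namespace is an assumption for illustration.

```azurecli
# Illustrative only: list the mobile packet core resources created in the resource group.
az resource list \
  --resource-group $resourceGroupName \
  --query "[?contains(type, 'Microsoft.MobilePacketCore')].{name:name, type:type}" \
  --output table
```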
### Deploy using Azure CLI
+Set up the following environment variables:
+
+```azurecli
+$ export resourceGroupName=<Name of resource group>
+$ export templateFile=<Path to resource bicep script>
+$ export resourceName=<resource Name>
+$ export location=<Azure region where resources are deployed>
+$ export templateParamsFile=<Path to bicep script parameters file>
+$ export deploymentName=<Name of the deployment>
+```
+> [!NOTE]
+> Choose a name that contains all associated Azure Operator 5G Core resources for the resource name. Use the same resource name for clusterServices and all associated network function resources.
+
+Enter the following command to deploy Azure Operator 5G Core:
+```azurecli
+az deployment group create \
+  --name $deploymentName \
+  --resource-group $resourceGroupName \
+  --template-file $templateFile \
+  --parameters $templateParamsFile
+```
-### Deploy using PowerShell
-
-```powershell
-New-AzResourceGroupDeployment `
-  -Name $deploymentName `
-  -ResourceGroupName $resourceGroupName `
-  -TemplateFile $templateFile `
-  -TemplateParameterFile $templateParamsFile `
-  -resourceName $resourceName
-```
+The following shows a sample deployment:
+
+ ```azurecli
+PS C:\src\test> az deployment group create `
+--resource-group ${resourceGroupName} `
+--template-file ./releases/2403.0-31-lite/AKS/bicep/nrfTemplateSecret.bicep `
+--parameters resourceName=${ResourceName} `
+--parameters locationName=${location} `
+--parameters ./releases/2403.0-31-lite/AKS/params/nrfParams.json `
+--verbose
+
+INFO: Command ran in 288.481 seconds (init: 1.008, invoke: 287.473)
+
+{
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName /providers/Microsoft.Resources/deployments/nrfTemplateSecret",
+ "location": null,
+ "name": "nrfTemplateSecret",
+ "properties": {
+ "correlationId": "00000000-0000-0000-0000-000000000000",
+ "debugSetting": null,
+ "dependencies": [],
+ "duration": "PT4M16.5545373S",
+ "error": null,
+ "mode": "Incremental",
+ "onErrorDeployment": null,
+ "outputResources": [
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ resourceGroupName /providers/Microsoft.MobilePacketCore/nrfDeployments/test-505",
+ "resourceGroup": " resourceGroupName "
+ }
+ ],
+
+ "outputs": null,
+ "parameters": {
+ "locationName": {
+ "type": "String",
+ "value": " location "
+ },
+ "replacement": {
+ "type": "SecureObject"
+ },
+ "resourceName": {
+ "type": "String",
+ "value": " resourceName "
+ }
+ },
+ "parametersLink": null,
+ "providers": [
+ {
+ "id": null,
+ "namespace": "Microsoft.MobilePacketCore",
+ "providerAuthorizationConsentState": null,
+ "registrationPolicy": null,
+ "registrationState": null,
+ "resourceTypes": [
+ {
+ "aliases": null,
+ "apiProfiles": null,
+ "apiVersions": null,
+ "capabilities": null,
+ "defaultApiVersion": null,
+ "locationMappings": null,
+ "locations": [
+ " location "
+ ],
+ "properties": null,
+ "resourceType": "nrfDeployments",
+ "zoneMappings": null
+ }
+ ]
+ }
+ ],
+ "provisioningState": "Succeeded",
+ "templateHash": "3717219524140185299",
+ "templateLink": null,
+ "timestamp": "2024-03-12T16:07:49.470864+00:00",
+ "validatedResources": null
+ },
+ "resourceGroup": " resourceGroupName ",
+ "tags": null,
+ "type": "Microsoft.Resources/deployments"
+}
+
+PS C:\src\test>
```

## Next step

- [Monitor the status of your Azure Operator 5G Core Preview deployment](quickstart-monitor-deployment-status.md)
operator-nexus Concepts Network Fabric Resource Update Commit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric-resource-update-commit.md
+
+ Title: Update and commit Network Fabric resources
+description: Learn how Nexus Network Fabric's resource update flow allows you to batch and update a set of Network Fabric resources.
++++ Last updated : 04/03/2024+
+#CustomerIntent: As a <type of user>, I want <what?> so that <why?>.
++
+# Update and commit Network Fabric resources
+
+Currently, Nexus Network Fabric resources require that you disable a parent resource (such as an L3 Isolation domain), re-PUT the parent or child resource with updated values, and execute the administrative post action to enable and configure the devices. Network Fabric's new resource update flow allows you to batch and update a set of Network Fabric resources via a `commitConfiguration` POST action while the resources remain enabled. Nothing changes if you prefer the current workflow of disabling the L3 Isolation domain, making changes, and then enabling the L3 Isolation domain again.
+
+## Network Fabric resource update overview
+
+Any Create, Update, Delete (CUD) operation on a child resource linked to an existing enabled parent resource, or an update to an enabled parent resource property, is considered an **Update** operation. For example, adding a new Internal network or a new subnet to an existing enabled Layer 3 Isolation domain (the Internal network is a child resource of the Layer 3 Isolation domain), or attaching a new route policy to an existing Internal network, both qualify as **Update** operations.
+
+Any update operation carried out on the supported Network Fabric resources shown in the following table puts the fabric into a pending commit state (currently **Accepted** in Configuration state), where you must initiate a fabric commit-configuration action to apply the desired changes. All updates to Network Fabric resources (including child resources) in the fabric follow the same workflow.
+
+Commit actions and updates to resources are only valid when the fabric is in a provisioned state and the Network Fabric resources are in an **Enabled** administrative state. Updates to parent and child resources can be batched (across various Network Fabric resources), and a single `commitConfiguration` action can then be performed to execute all the changes in one POST action.
+
+Creation of parent resources and their enablement via administrative actions are independent of the Update/Commit workflow. All administrative enable/disable actions are likewise independent and don't require a `commitConfiguration` trigger to execute. The `commitConfiguration` action applies only when an operator wants to update existing Azure Resource Manager resources while the fabric and the parent resource are in an enabled state. Any automation scripts or Bicep templates that operators used to create and enable Network Fabric resources require no changes.
+
+## User workflow
+
+To successfully update resources, the fabric must be in a provisioned state. The following steps are involved in updating Network Fabric resources.
+
+1. The operator updates the required Network Fabric resources (multiple resource updates can be batched) that were already enabled (configuration applied to devices) by using an update call via Azure CLI, Azure Resource Manager, or the Azure portal. (Refer to the supported scenarios, resources, and parameter details in the following table.)
+
+ In the following example, a new `internalnetwork` is added to an existing L3Isolation **l3domain101523-sm**.
+
+ ```azurecli
+ az networkfabric internalnetwork create --subscription 5ffad143-8f31-4e1e-b171-fa1738b14748 --resource-group "Fab3Lab-4-1-PROD" --l3-isolation-domain-name "l3domain101523-sm" --resource-name "internalnetwork101523" --vlan-id 789 --mtu 1432 --connected-ipv4-subnets "[{prefix:'10.252.11.0/24'},{prefix:'10.252.12.0/24'}]"
+ ```
+
+1. Once the Azure Resource Manager update call succeeds, the specific resource's `ConfigurationState` is set to **Accepted** and when it fails, it's set to **Rejected**. Fabric `ConfigurationState` is set to **Accepted** regardless of PATCH call success/failure.
+
+ If any Azure Resource Manager resource on the fabric (such as Internal Network or `RoutePolicy`) is in **Rejected** state, the Operator has to correct the configuration and ensure the specific resource's ConfigurationState is set to Accepted before proceeding further.
+
+2. Operator executes the commitConfiguration POST action on Fabric resource.
+
+ ```azurecli
+ az networkfabric fabric commit-configuration --subscription 5ffad143-8f31-4e1e-b171-fa1738b14748 --resource-group "FabLAB-4-1-PROD" --resource-name "nffab3-4-1-prod"
+ ```
+
+3. The service validates that all the resource updates succeeded and validates the inputs. It also validates connected logical resources to ensure consistent behavior and configuration. Once all validations succeed, the new configuration is generated and pushed to the devices.
+
+1. Specific resource `configurationState` is reset to **Succeeded** and Fabric `configurationState` is set to **Provisioned**.
+1. If the `commitConfiguration` action fails, the service displays the appropriate error message and notifies the operator of the potential Network Fabric resource update failure.
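
After the `commitConfiguration` action completes, you can optionally confirm that the fabric has returned to the **Provisioned** configuration state. The following is a minimal sketch: it assumes the `az networkfabric fabric show` command from the same CLI extension and that `configurationState` is surfaced in its output; the resource group and fabric names reuse the earlier example values.

```azurecli
# Illustrative only: inspect the fabric's configuration state after the commit.
az networkfabric fabric show \
  --resource-group "FabLAB-4-1-PROD" \
  --resource-name "nffab3-4-1-prod" \
  --query "configurationState" \
  --output tsv
```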
++
+|State |Definition |Before Azure Resource Manager Resource Update |Before CommitConfiguration & Post Azure Resource Manager update |Post CommitConfiguration |
+|||||--|
+|**Administrative State** | State to represent administrative action performed on the resource | Enabled (only enabled is supported) | Enabled (only enabled is supported) |Enabled (user can disable) |
+|**Configuration State** | State to represent operator actions/service driven configurations |**Resource State** - Succeeded, <br> **Fabric State** Provisioned | **Resource State** <br>- Accepted (Success)<br>- Rejected (Failure) <br>**Fabric State** <br>- Accepted | **Resource State** <br> - Accepted (Failure), <br>- Succeeded (Success)<br> **Fabric State**<br> - Provisioned |
+|Provisioning State | State to represent Azure Resource Manager provisioning state of resources |Provisioned | Provisioned | Provisioned |
++
+## Supported Network Fabric resources and scenarios
+
+The following table lists the Network Fabric resources that support updates, and the supported scenarios (Network Fabric 4.1, Nexus 2310.1).
+
+| Network Fabric Resource | Type | Scenarios Supported | Scenarios Not Supported |Notes |
+| -- | -- | | -- | -- |
+| **Layer 2 Isolation Domain** | Parent | - Update to properties – MTU <br> - Addition/update tags | *Re-PUT* of resource | |
+| **Layer 3 Isolation Domain** | Parent | Update to properties <br> - redistribute connected. <br>- redistribute static routes. <br>- Aggregate route configuration <br>- connected subnet route policy. <br>Addition/update tags | *Re-PUT* of resource | |
+| **Internal Network** | Child (of L3 ISD) | Adding a new Internal network <br> Update to properties  <br>- MTU <br>- Addition/Update of connected IPv4/IPv6 subnets <br>- Addition/Update of IPv4/IPv6 RoutePolicy <br>- Addition/Update of Egress/Ingress ACL <br>- Update `isMonitoringEnabled` flag <br>- Addition/Update to Static routes <br>- BGP Config <br> Addition/update tags | - *Re-PUT* of resource. <br>- Deleting an Internal network when parent Layer 3 Isolation domain is enabled. | To delete the resource, the parent resource must be disabled |
+| **External Network** | Child (of L3 ISD) | Update to properties  <br>- Addition/Update of IPv4/IPv6 RoutePolicy <br>- Option A properties MTU, Addition/Update of Ingress and Egress ACLs, <br>- Option A properties – BFD Configuration <br>- Option B properties – Route Targets <br> Addition/Update of tags | - *Re-PUT* of resource. <br>- Creating a new external network <br>- Deleting an External network when parent Layer 3 Isolation domain is enabled. | To delete the resource, the parent resource must be disabled.<br><br> NOTE: Only one external network is supported per ISD. |
+| **Route Policy** | Parent | - Update entire statement including seq number, condition, action. <br>- Addition/update tags | - *Re-PUT* of resource. <br>- Update to Route Policy linked to a Network-to-Network Interconnect resource. | To delete the resource, the `connectedResource` (`IsolationDomain` or N-to-N Interconnect) shouldn't hold any reference. |
+| **IPCommunity** | Parent | Update entire ipCommunity rule including seq number, action, community members, well known communities. | *Re-PUT* of resource | To delete the resource, the connected `RoutePolicy` Resource shouldn't hold any reference. |
+| **IPPrefixes** | Parent | - Update the entire IPPrefix rule including seq number, networkPrefix, condition, subnetMask Length. <br>- Addition/update tags | *Re-PUT* of resource | To delete the resource, the connected `RoutePolicy` Resource shouldn't hold any reference. |
+| **IPExtendedCommunity** | Parent | - Update entire IPExtended community rule including seq number, action, route targets. <br>- Addition/update tags | *Re-PUT* of resource | To delete the resource, the connected `RoutePolicy` Resource shouldn't hold any reference.|
+| **ACLs** | Parent | - Addition/Update to match configurations and dynamic match configurations. <br>- Update to configuration type <br>- Addition/updating ACLs URL <br>- Addition/update tags | - *Re-PUT* of resource. <br>- Update to ACLs linked to a Network-to-Network Interconnect resource. | To delete the resource, the `connectedResource` (like `IsolationDomain` or N-to-N Interconnect) shouldn't hold any reference. |
+
+## Behavior notes and constraints
+
+- If a parent resource is in a **Disabled** administrative state and there are changes made to either the parent or the child resources, the `commitConfiguration` action isn't applicable. Enabling the resource would push the configuration. The commit path for such resources is triggered only when the parent resource is in the **Enabled** administrative state.
+
+- If `commitConfiguration` fails, the fabric remains in the **Accepted** configuration state until the user addresses the issues and performs a successful `commitConfiguration`. Currently, only roll-forward mechanisms are provided when failure occurs.
+
+- If the Fabric configuration is in an **Accepted** state and has updates to Azure Resource Manager resources yet to be committed, then no administrative action is allowed on the resources.
+
+- If the Fabric configuration is in an **Accepted** state and has updates to Azure Resource Manager resources yet to be committed, then a delete operation on the supported resources can't be triggered.
+
+- Creation of parent resources is independent of `commitConfiguration` and the update flow. *Re-PUT* of resources isn't supported on any resource.
+
+- Network Fabric resource update is supported for both Greenfield deployments and Brownfield deployments but with some constraints.
+
+ - In the Greenfield deployment, the Fabric configuration state is **Accepted** once any updates are made to Network Fabric resources. Once the `commitConfiguration` action is triggered, it moves to either the **Provisioned** or the **Accepted** state depending on success or failure of the action.
+
+ - In the Brownfield deployment, the `commitConfiguration` action is supported but the supported Network Fabric resources (such as Isolation domains, Internal Networks, RoutePolicy & ACLs) must be created using general availability version of the API (2023-06-15). This temporary restriction is relaxed following the migration of all resources to the latest version.
+
+ - In the Brownfield deployment, the Fabric configuration state remains in a **Provisioned** state when there are changes to any supported Network Fabric resources or commitConfiguration action is triggered. This behavior is temporary until all fabrics are migrated to the latest version.
+
+- Route policy and other related resources (IP community, IP Extended Community, IP PrefixList) updates are considered as a list replace operation. All the existing statements are removed and only the new updated statements are configured.
+
+- Updating or removing existing subnets, routes, BGP configurations, and other relevant network parameters in the Internal network or External network configuration might cause traffic disruption and should be performed at the operator's discretion.
+
+- Update of new Route policies and ACLs might cause traffic disruption depending on the rules applied.
+
+- Use a list command on the specific resource type (for example, list all resources of the Internal network type) to verify which resources are updated but not yet committed to the devices. Resources that have an **Accepted** or **Rejected** configuration state can be filtered and identified as resources that are yet to be committed or where the commit to the devices failed.
+
+For example:
+
+```azurecli
+az networkfabric internalnetwork list --resource-group "example-rg" --l3domain "example-l3domain"
+```
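
To narrow that list down to only the resources still awaiting a commit (or whose commit failed), a JMESPath filter on the configuration state can be applied. This is a sketch that assumes `configurationState` appears as a property in the list output; the resource group and isolation domain names reuse the example values above.

```azurecli
# Illustrative only: show internal networks that are updated but not yet committed, or rejected.
az networkfabric internalnetwork list \
  --resource-group "example-rg" \
  --l3domain "example-l3domain" \
  --query "[?configurationState=='Accepted' || configurationState=='Rejected'].{name:name, configurationState:configurationState}" \
  --output table
```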
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
Two host network interfaces and one management network interface are created at
With the Azure Payment HSM provisioning service, customers have native access to two host network interfaces and one management interface on the payment HSM. This screenshot displays the Azure Payment HSM resources within a resource group. ## Why use Azure Payment HSM?
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The following table provides information on the Peering Service connectivity par
| [IIJ](https://www.iij.ad.jp/en/) | Japan | | [Intercloud](https://intercloud.com/what-we-do/partners/microsoft-saas/)| Europe | | [Kordia](https://www.kordia.co.nz/cloudconnect) | Oceania |
-| [LINX](https://www.linx.net/services/microsoft-azure-peering/) | Europe |
+| [LINX](https://www.linx.net/services/microsoft-azure-peering/) | Europe, North America |
| [Liquid Telecom](https://liquidc2.com/connect/#maps) | Africa | | [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) | Asia, Europe, North America | | [MainOne](https://www.mainone.net/connectivity-services/cloud-connect/) | Africa |
The following table provides information on the Peering Service connectivity par
| Metro | Partners (IXPs) | |-|--| | Amsterdam | [AMS-IX](https://www.ams-ix.net/ams/service/microsoft-azure-peering-service-maps) |
-| Ashburn | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) |
+| Ashburn | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/), [LINX](https://www.linx.net/services/microsoft-azure-peering/) |
| Atlanta | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | | Barcelona | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | Chicago | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) |
The following table provides information on the Peering Service connectivity par
| Kuala Lumpur | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | London | [LINX](https://www.linx.net/services/microsoft-azure-peering/) | | Madrid | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) |
+| Manchester | [LINX](https://www.linx.net/services/microsoft-azure-peering/) |
| Marseilles | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | Mumbai | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | New York | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) |
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions
description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server. Previously updated : 3/19/2024 Last updated : 04/07/2024
Azure Database for PostgreSQL flexible server instance supports a subset of key
## Extension versions The following extensions are available in Azure Database for PostgreSQL flexible server:-
-|**Extension Name** |**Description** |**Postgres 16**|**Postgres 15**|**Postgres 14**|**Postgres 13**|**Postgres 12**|**Postgres 11**|
-|--|--|--|--|--|--|--||
-|[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) |Used to parse an address into constituent elements. |3.3.3 |3.1.1 |3.1.1 |3.1.1 |3.0.0 |2.5.1 |
-|[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html)|Address Standardizer US dataset example. |3.3.3 |3.1.1 |3.1.1 |3.1.1 |3.0.0 |2.5.1 |
-|[amcheck](https://www.postgresql.org/docs/13/amcheck.html) |Functions for verifying the logical consistency of the structure of relations. |1.3 |1.2 |1.2 |1.2 |1.2 |1.1 |
-|[anon](https://gitlab.com/dalibo/postgresql_anonymizer) |Mask or replace personally identifiable information (PII) or commercially sensitive data from a PostgreSQL database. |1.2.0 |1.2.0 |1.2.0 |1.2.0 |1.2.0 |N/A |
-|[azure_ai](./generative-ai-azure-overview.md) |Azure OpenAI and Cognitive Services integration for PostgreSQL. |0.1.0 |0.1.0 |0.1.0 |0.1.0 |N/A |N/A |
-|[azure_storage](../../postgresql/flexible-server/concepts-storage-extension.md) |Extension to export and import data from Azure Storage. |1.3 |1.3 |1.3 |1.3 |1.3 |N/A |
-|[bloom](https://www.postgresql.org/docs/13/bloom.html) |Bloom access method - signature file based index. |1 |1 |1 |1 |1 |1 |
-|[btree_gin](https://www.postgresql.org/docs/13/btree-gin.html) |Support for indexing common datatypes in GIN. |1.3 |1.3 |1.3 |1.3 |1.3 |1.3 |
-|[btree_gist](https://www.postgresql.org/docs/13/btree-gist.html) |Support for indexing common datatypes in GiST. |1.7 |1.5 |1.5 |1.5 |1.5 |1.5 |
-|[citext](https://www.postgresql.org/docs/13/citext.html) |Data type for case-insensitive character strings. |1.6 |1.6 |1.6 |1.6 |1.6 |1.5 |
-|[cube](https://www.postgresql.org/docs/13/cube.html) |Data type for multidimensional cubes. |1.5 |1.4 |1.4 |1.4 |1.4 |1.4 |
-|[dblink](https://www.postgresql.org/docs/13/dblink.html) |Connect to other PostgreSQL databases from within a database. |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[dict_int](https://www.postgresql.org/docs/13/dict-int.html) |Text search dictionary template for integers. |1 |1 |1 |1 |1 |1 |
-|[dict_xsyn](https://www.postgresql.org/docs/13/dict-xsyn.html) |Text search dictionary template for extended synonym processing. |1 |1 |1 |1 |1 |1 |
-|[earthdistance](https://www.postgresql.org/docs/13/earthdistance.html) |Calculate great-circle distances on the surface of the Earth. |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 |
-|[fuzzystrmatch](https://www.postgresql.org/docs/13/fuzzystrmatch.html) |Determine similarities and distance between strings. |1.2 |1.1 |1.1 |1.1 |1.1 |1.1 |
-|[hstore](https://www.postgresql.org/docs/13/hstore.html) |Data type for storing sets of (key, value) pairs. |1.8 |1.7 |1.7 |1.7 |1.2 |1.1.2 |
-|[hypopg](https://github.com/HypoPG/hypopg) |Extension adding support for hypothetical indexes. |1.3.1 |1.3.1 |1.3.1 |1.3.1 |1.6 |1.5 |
-|[intagg](https://www.postgresql.org/docs/13/intagg.html) |Integer aggregator and enumerator. (Obsolete) |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 |
-|[intarray](https://www.postgresql.org/docs/13/intarray.html) |Functions, operators, and index support for 1-D arrays of integers. |1.5 |1.3 |1.3 |1.3 |1.2 |1.2 |
-|[isn](https://www.postgresql.org/docs/13/isn.html) |Data types for international product numbering standards: EAN13, UPC, ISBN (books), ISMN (music), and ISSN (serials). |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[lo](https://www.postgresql.org/docs/13/lo.html) |Large object maintenance. |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 |
-|[login_hook](https://github.com/splendiddata/login_hook) |Extension to execute some code on user login, comparable to Oracle's after logon trigger. |1.5 |1.4 |1.4 |1.4 |1.4 |1.4 |
-|[ltree](https://www.postgresql.org/docs/13/ltree.html) |Data type for hierarchical tree-like structures. |1.2 |1.2 |1.2 |1.2 |1.1 |1.1 |
-|[orafce](https://github.com/orafce/orafce) |Implements in Postgres some of the functions from the Oracle database that are missing. |4.4 |3.24 |3.18 |3.18 |3.18 |3.18 |
-|[pageinspect](https://www.postgresql.org/docs/13/pageinspect.html) |Inspect the contents of database pages at a low level. |1.12 |1.8 |1.8 |1.8 |1.7 |1.7 |
-|[pg_buffercache](https://www.postgresql.org/docs/13/pgbuffercache.html) |Examine the shared buffer cache. |1.4 |1.3 |1.3 |1.3 |1.3 |1.3 |
-|[pg_cron](https://github.com/citusdata/pg_cron) |Job scheduler for PostgreSQL. |1.5 |1.4 |1.4 |1.4 |1.4 |1.4 |
-|[pg_failover_slots](https://github.com/EnterpriseDB/pg_failover_slots) (preview) |Logical replication slot manager for failover purposes. |1.0.1 |1.0.1 |1.0.1 |1.0.1 |1.0.1 |1.0.1 |
-|[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) |Examine the free space map (FSM). |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[pg_hint_plan](https://github.com/ossc-db/pg_hint_plan) |Makes it possible to tweak PostgreSQL execution plans using so-called "hints" in SQL comments. |1.6.0 |1.4 |1.4 |1.4 |1.4 |1.4 |
-|[pg_partman](https://github.com/pgpartman/pg_partman) |Manage partitioned tables by time or ID. |4.7.1 |4.7.1 |4.6.1 |4.5.0 |4.5.0 |4.5.0 |
-|[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) |Prewarm relation data. |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[pg_repack](https://reorg.github.io/pg_repack/) |Lets you remove bloat from tables and indexes. |1.4.7 |1.4.7 |1.4.7 |1.4.7 |1.4.7 |1.4.7 |
-|[pg_squeeze](https://github.com/cybertec-postgresql/pg_squeeze) |A tool to remove unused space from a relation. |1.6 |1.5 |1.5 |1.5 |1.5 |1.5 |
-|[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) |Track execution statistics of all SQL statements executed. |1.1 |1.8 |1.8 |1.8 |1.7 |1.6 |
-|[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) |Text similarity measurement and index searching based on trigrams. |1.6 |1.5 |1.5 |1.5 |1.4 |1.4 |
-|[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) |Examine the visibility map (VM) and page-level visibility info. |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[pgaudit](https://www.pgaudit.org/) |Provides auditing functionality. |16.0 |1.7 |1.6.2 |1.5 |1.4 |1.3.1 |
-|[pgcrypto](https://www.postgresql.org/docs/13/pgcrypto.html) |Cryptographic functions. |1.3 |1.3 |1.3 |1.3 |1.3 |1.3 |
-|[pglogical](https://github.com/2ndQuadrant/pglogical) |Logical streaming replication. |2.4.4 |2.3.2 |2.3.2 |2.3.2 |2.3.2 |2.3.2 |
-|[pgrouting](https://pgrouting.org/) |Geospatial database to provide geospatial routing. |N/A |3.3.0 |3.3.0 |3.3.0 |3.3.0 |3.3.0 |
-|[pgrowlocks](https://www.postgresql.org/docs/13/pgrowlocks.html) |Show row-level locking information. |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[pgstattuple](https://www.postgresql.org/docs/13/pgstattuple.html) |Show tuple-level statistics. |1.5 |1.5 |1.5 |1.5 |1.5 |1.5 |
-|[pgvector](https://github.com/pgvector/pgvector) |Open-source vector similarity search for Postgres. |0.6.0 |0.6.0 |0.6.0 |0.6.0 |0.6.0 |0.5.1 |
-|[plpgsql](https://www.postgresql.org/docs/13/plpgsql.html) |PL/pgSQL procedural language. |1 |1 |1 |1 |1 |1 |
-|[plv8](https://github.com/plv8/plv8) |Trusted JavaScript language extension. |3.1.7 |3.1.7 |3.0.0 |3.0.0 |3.2.0 |3.0.0 |
-|[postgis](https://www.postgis.net/) |PostGIS geometry, geography. |3.3.3 |3.2.0 |3.2.0 |3.2.0 |3.2.0 |2.5.5 |
-|[postgis_raster](https://www.postgis.net/) |PostGIS raster types and functions. |3.3.3 |3.2.0 |3.2.0 |3.2.0 |3.2.0 |N/A |
-|[postgis_sfcgal](https://www.postgis.net/) |PostGIS SFCGAL functions. |3.3.3 |3.2.0 |3.2.0 |3.2.0 |3.2.0 |2.5.5 |
-|[postgis_tiger_geocoder](https://www.postgis.net/) |PostGIS tiger geocoder and reverse geocoder. |3.3.3 |3.2.0 |3.2.0 |3.2.0 |3.2.0 |2.5.5 |
-|[postgis_topology](https://postgis.net/docs/Topology.html) |PostGIS topology spatial types and functions. |3.3.3 |3.2.0 |3.2.0 |3.2.0 |3.2.0 |2.5.5 |
-|[postgres_fdw](https://www.postgresql.org/docs/13/postgres-fdw.html) |Foreign-data wrapper for remote PostgreSQL servers. |1.1 |1 |1 |1 |1 |1 |
-|[semver](https://pgxn.org/dist/semver/doc/semver.html) |Semantic version data type. |0.32.1 |0.32.0 |0.32.0 |0.32.0 |0.32.0 |0.32.0 |
-|[session_variable](https://github.com/splendiddata/session_variable) |Provides a way to create and maintain session scoped variables and constants. |3.3 |3.3 |3.3 |3.3 |3.3 |3.3 |
-|[sslinfo](https://www.postgresql.org/docs/13/sslinfo.html) |Information about SSL certificates. |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
-|[tablefunc](https://www.postgresql.org/docs/11/tablefunc.html) |Functions that manipulate whole tables, including crosstab. |1 |1 |1 |1 |1 |1 |
-|[tds_fdw](https://github.com/tds-fdw/tds_fdw) |PostgreSQL foreign data wrapper that can connect to databases that use the Tabular Data Stream (TDS) protocol, such as Sybase databases and Microsoft SQL server.|2.0.3 |2.0.3 |2.0.3 |2.0.3 |2.0.3 |2.0.3 |
-|[timescaledb](https://github.com/timescale/timescaledb) |Open-source relational database for time-series and analytics. |N/A |2.5.1 |2.5.1 |2.5.1 |2.5.1 |1.7.4 |
-|[tsm_system_rows](https://www.postgresql.org/docs/13/tsm-system-rows.html) |TABLESAMPLE method which accepts number of rows as a limit. |1 |1 |1 |1 |1 |1 |
-|[tsm_system_time](https://www.postgresql.org/docs/13/tsm-system-time.html) |TABLESAMPLE method which accepts time in milliseconds as a limit. |1 |1 |1 |1 |1 |1 |
-|[unaccent](https://www.postgresql.org/docs/13/unaccent.html) |Text search dictionary that removes accents. |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 |
-|[uuid-ossp](https://www.postgresql.org/docs/13/uuid-ossp.html) |Generate universally unique identifiers (UUIDs). |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 |
## dblink and postgres_fdw
For more details on restore method with Timescale enabled database, see [Timesca
### Restore a Timescale database using timescaledb-backup
-While running `SELECT timescaledb_post_restore()` procedure listed above you might get permissions denied error updating timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in Cloud PaaS database services. In this case you can perform alternative method using `timescaledb-backup` tool to backup and restore Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
+While running `SELECT timescaledb_post_restore()` procedure listed above you might get permissions denied error updating timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in Cloud PaaS database services. In this case you can perform alternative method using `timescaledb-backup` tool to back up and restore Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
To do so, you should do following 1. Install tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup) 1. Create a target Azure Database for PostgreSQL flexible server instance and database
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
For more on SSL\TLS configuration on the client, see [PostgreSQL documentation](
> * For connectivity to servers deployed to Azure government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona): [DigiCert Global Root G2](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm) root CA certificates, as services are migrating from Digicert to Microsoft CA. > * For connectivity to servers deployed to Azure public cloud regions worldwide : [Digicert Global Root CA](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm), as services are migrating from Digicert to Microsoft CA.
-### Importing Root CA Certificates in Java Key Store on the client for certificate pinning scenarios
+### Downloading Root CA certificates and updating application clients in certificate pinning scenarios
-Custom-written Java applications use a default keystore, called *cacerts*, which contains trusted certificate authority (CA) certificates. It's also often known as Java trust store. A certificates file named *cacerts* resides in the security properties directory, java.home\lib\security, where java.home is the runtime environment directory (the jre directory in the SDK or the top-level directory of the JavaΓäó 2 Runtime Environment).
-You can use following directions to update client root CA certificates for client certificate pinning scenarios with PostgreSQL Flexible Server:
-1. Make a backup copy of your custom keystore.
-2. Download following certificates:
+To update client applications in certificate pinning scenarios, you can download certificates from the following URIs:
* For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona) download Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 certificates from following URIs: Microsoft RSA Root Certificate Authority 2017 https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt, DigiCert Global Root G2 https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem. * For connectivity to servers deployed in Azure public regions worldwide download Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root CA certificates from following URIs: Microsoft RSA Root Certificate Authority 2017 https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt, Digicert Global Root CA https://cacerts.digicert.com/DigiCertGlobalRootCA.crt
-3. Optionally, to prevent future disruption, it's also recommended to add the following roots to the trusted store:
+* Optionally, to prevent future disruption, it's also recommended to add the following roots to the trusted store:
Microsoft ECC Root Certificate Authority 2017 - https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt
-4. Generate a combined CA certificate store with both Root CA certificates are included. Example below shows using DefaultJavaSSLFactory for PostgreSQL JDBC users.
- * For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona)
- ```powershell
-
-
- keytool -importcert -alias PostgreSQLServerCACert -file D:\ DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
-
-keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\ Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt
-```
- * For connectivity to servers deployed in Azure public regions worldwide
-```powershell
-
- keytool -importcert -alias PostgreSQLServerCACert -file D:\ DigiCertGlobalRootCA.crt.pem -keystore truststore -storepass password -noprompt
-
-keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\ Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt
-```
-
- 5. Replace the original keystore file with the new generated one:
-
-```java
-System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
-System.setProperty("javax.net.ssl.trustStorePassword","password");
-```
-6. Replace the original root CA pem file with the combined root CA file and restart your application/client.
-
-For more information on configuring client certificates with PostgreSQL JDBC driver, see this [documentation](https://jdbc.postgresql.org/documentation/ssl/)
-
-> [!NOTE]
-> Azure Database for PostgreSQL - Flexible server doesn't support [certificate based authentication](https://www.postgresql.org/docs/current/auth-cert.html) at this time.
-
-### Get list of trusted certificates in Java Key Store
-
-As stated above, Java, by default, stores the trusted certificates in a special file named *cacerts* that is located inside Java installation folder on the client.
-Example below first reads *cacerts* and loads it into *KeyStore* object:
-```java
-private KeyStore loadKeyStore() {
- String relativeCacertsPath = "/lib/security/cacerts".replace("/", File.separator);
- String filename = System.getProperty("java.home") + relativeCacertsPath;
- FileInputStream is = new FileInputStream(filename);
- KeyStore keystore = KeyStore.getInstance(KeyStore.getDefaultType());
- String password = "changeit";
- keystore.load(is, password.toCharArray());
-
- return keystore;
-}
-```
-The default password for *cacerts* is *changeit* , but should be different on real client, as administrators recommend changing password immediately after Java installation.
-Once we loaded KeyStore object, we can use the *PKIXParameters* class to read certificates present.
-```java
-public void whenLoadingCacertsKeyStore_thenCertificatesArePresent() {
- KeyStore keyStore = loadKeyStore();
- PKIXParameters params = new PKIXParameters(keyStore);
- Set<TrustAnchor> trustAnchors = params.getTrustAnchors();
- List<Certificate> certificates = trustAnchors.stream()
- .map(TrustAnchor::getTrustedCert)
- .collect(Collectors.toList());
-
- assertFalse(certificates.isEmpty());
-}
-```
-### Updating Root CA certificates when using clients in Azure App Services with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
-
-For Azure App services, connecting to Azure Database for PostgreSQL, we can have two possible scenarios on updating client certificates and it depends on how on you're using SSL with your application deployed to Azure App Services.
-
-* Usually new certificates are added to App Service at platform level prior to changes in Azure Database for PostgreSQL - Flexible Server. If you are using the SSL certificates included on App Service platform in your application, then no action is needed. Consult following [Azure App Service documentation](../../app-service/configure-ssl-certificate.md) for more information.
-* If you're explicitly including the path to SSL cert file in your code, then you would need to download the new cert and update the code to use the new cert. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress)
-
- ### Updating Root CA certificates when using clients in Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
-
-If you're trying to connect to the Azure Database for PostgreSQL using applications hosted in Azure Kubernetes Services (AKS) and pinning certificates, it's similar to access from a dedicated customers host environment. Refer to the steps [here](../../aks/ingress-tls.md).
-
-### Updating Root CA certificates for For .NET (Npgsql) users on Windows with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
-
-For .NET (Npgsql) users on Windows, connecting to Azure Database for PostgreSQL - Flexible Servers deployed in Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona) make sure **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
-
-For .NET (Npgsql) users on Windows, connecting to Azure Database for PostgreSQL - Flexible Servers deployed in Azure pubiic regions worldwide make sure **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root CA **both** exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
---
-### Updating Root CA certificates for other clients for certificate pinning scenarios
-
-For other PostgreSQL client users, you can merge two CA certificate files like this format below:
--BEGIN CERTIFICATE--
-(Root CA1: DigiCertGlobalRootCA.crt.pem)
END CERTIFICATE--BEGIN CERTIFICATE--
-(Root CA2: Microsoft ECC Root Certificate Authority 2017.crt.pem)
END CERTIFICATE--
+Detailed information on updating client applications certificate stores with new Root CA certificates has been documented in this [tutorial](../flexible-server/how-to-update-client-certificates-java.md).
### Read Replicas with certificate pinning scenarios
Therefore, for clients that use **verify-ca** and **verify-full** sslmode config
* For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona): [DigiCert Global Root G2](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm) root CA certificates, as services are migrating from Digicert to Microsoft CA. * For connectivity to servers deployed to Azure public cloud regions worldwide: [Digicert Global Root CA](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm), as services are migrating from Digicert to Microsoft CA.
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible server doesn't support [certificate based authentication](https://www.postgresql.org/docs/current/auth-cert.html) at this time.
-## Testing SSL\TLS Connectivity
+## Testing SSL/TLS Connectivity
Before trying to access your SSL enabled server from client application, make sure you can get to it via psql. You should see output similar to the following if you established an SSL connection.
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Azure Database for PostgreSQL flexible server offers [PgBouncer](https://github.
PgBouncer uses a more lightweight model that utilizes asynchronous I/O, and only uses actual Postgres connections when needed, that is, when inside an open transaction, or when a query is active. This model can support thousands of connections more easily with low overhead and allows scaling to up to 10,000 connections with low overhead. When enabled, PgBouncer runs on port 6432 on your database server. You can change your applicationΓÇÖs database connection configuration to use the same host name, but change the port to 6432 to start using PgBouncer and benefit from improved idle connection scaling.
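
As a hedged illustration of that connection change (the server name, database, and user below are placeholders, not values from this article), a client such as psql simply targets port 6432 on the same host instead of the default 5432:

```azurecli
# Illustrative only: connect through the built-in PgBouncer by using port 6432 instead of 5432.
psql "host=<server-name>.postgres.database.azure.com port=6432 dbname=<database> user=<admin-user> sslmode=require"
```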
-PgBouncer in Azure database for PostgreSQL flexible server supports [Microsoft Entra authentication (AAD)](./concepts-azure-ad-authentication.md) authentication.
+PgBouncer in Azure database for PostgreSQL flexible server supports [Microsoft Entra authentication](./concepts-azure-ad-authentication.md).
> [!NOTE] > PgBouncer is supported on General Purpose and Memory Optimized compute tiers in both public access and private access networking.
You can configure PgBouncer settings with these parameters:
For more information about PgBouncer configurations, see [pgbouncer.ini](https://www.pgbouncer.org/config.html).
+The following table shows the versions of PgBouncer currently deployed together with each major version of PostgreSQL:
+
+
+> [!IMPORTANT]
+> Upgrading of PgBouncer is managed by Azure.
postgresql Generative Ai Azure Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-cognitive.md
description: Create AI applications with sentiment analysis, summarization, or k
Previously updated : 03/18/2024 Last updated : 04/08/2024
select azure_ai.set_setting('azure_cognitive.region', '<Region>');
### `azure_cognitive.analyze_sentiment` ```postgresql
-azure_cognitive.analyze_sentiment(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT TRUE, disable_service_logs boolean DEFAULT false)
+azure_cognitive.analyze_sentiment(text text, language text DEFAULT NULL::text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.analyze_sentiment(text text[], language text DEFAULT NULL::text, batch_size integer DEFAULT 10, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.analyze_sentiment(text text[], language text[] DEFAULT NULL::text[], batch_size integer DEFAULT 10, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `batch_size`
+
+`integer DEFAULT 10` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.analyze_sentiment(text text, language text, timeout_ms integer D
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `disable_service_logs`
+##### `max_attempts`
-`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for sentiment analysis if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for sentiment analysis, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.sentiment_analysis_result` a result record containing the sentiment predictions of the input text. It contains the sentiment, which can be `positive`, `negative`, `neutral`, and `mixed`; and the score for positive, neutral, and negative found in the text represented as a real number between 0 and 1. For example in `(neutral,0.26,0.64,0.09)`, the sentiment is `neutral` with `positive` score at `0.26`, neutral at `0.64` and negative at `0.09`.
+`azure_cognitive.sentiment_analysis_result` or `TABLE(result azure_cognitive.sentiment_analysis_result)` a single element or a single-column table, depending on the overload of the function used, with the sentiment predictions of the input text. It contains the sentiment, which can be `positive`, `negative`, `neutral`, and `mixed`; and the score for positive, neutral, and negative found in the text represented as a real number between 0 and 1. For example in `(neutral,0.26,0.64,0.09)`, the sentiment is `neutral` with `positive` score at `0.26`, neutral at `0.64` and negative at `0.09`.
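For illustration, a minimal call using the single-text overload might look like the following sketch; the sample sentence and language code are placeholders, and it assumes the `azure_ai` extension is enabled and the `azure_cognitive` endpoint and key are already configured:

```postgresql
-- Single-text overload: returns one azure_cognitive.sentiment_analysis_result record
SELECT azure_cognitive.analyze_sentiment(
    'The check-in was fast and the room was spotless.',
    'en'
);
```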
## Language detection
For more information, see Cognitive Services Compliance and Privacy notes at htt
### `azure_cognitive.detect_language` ```postgresql
-azure_cognitive.detect_language(text TEXT, timeout_ms INTEGER DEFAULT 3600000, throw_on_error BOOLEAN DEFAULT TRUE, disable_service_logs BOOLEAN DEFAULT FALSE)
+azure_cognitive.detect_language(text text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.detect_language(text text[], batch_size integer DEFAULT 1000, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
+
+##### `batch_size`
+
+`integer DEFAULT 1000` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.detect_language(text TEXT, timeout_ms INTEGER DEFAULT 3600000, t
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `disable_service_logs`
+##### `max_attempts`
-`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for language detection if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for language detection, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.language_detection_result`, a result containing the detected language name, its two-letter ISO 639-1 representation, and the confidence score for the detection. For example in `(Portuguese,pt,0.97)`, the language is `Portuguese`, and detection confidence is `0.97`.
+`azure_cognitive.language_detection_result` or `TABLE(result azure_cognitive.language_detection_result)` a single element or a single-column table, depending on the overload of the function used, with the detected language name, its two-letter ISO 639-1 representation, and the confidence score for the detection. For example in `(Portuguese,pt,0.97)`, the language is `Portuguese`, and detection confidence is `0.97`.
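A minimal sketch of the single-text overload, assuming the `azure_ai` extension and `azure_cognitive` settings are already configured (the sample sentence is illustrative):

```postgresql
-- Returns one azure_cognitive.language_detection_result record
SELECT azure_cognitive.detect_language('Bonjour, comment allez-vous ?');
```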
## Key phrase extraction
For more information, see Cognitive Services Compliance and Privacy notes at htt
### `azure_cognitive.extract_key_phrases` ```postgresql
-azure_cognitive.extract_key_phrases(text TEXT, language TEXT, timeout_ms INTEGER DEFAULT 3600000, throw_on_error BOOLEAN DEFAULT TRUE, disable_service_logs BOOLEAN DEFAULT FALSE)
+azure_cognitive.extract_key_phrases(text text, language text DEFAULT NULL::text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.extract_key_phrases(text text[], language text DEFAULT NULL::text, batch_size integer DEFAULT 10, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.extract_key_phrases(text text[], language text[] DEFAULT NULL::text[], batch_size integer DEFAULT 10, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `batch_size`
+
+`integer DEFAULT 10` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.extract_key_phrases(text TEXT, language TEXT, timeout_ms INTEGER
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `disable_service_logs`
+##### `max_attempts`
-`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for key phrase extraction if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for key phrase extraction, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`text[]`, a collection of key phrases identified in the text. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"Cognitive Services Compliance","Privacy notes",information}`.
+`text[]` or `TABLE(key_phrases text[])` a single element or a single-column table, with the key phrases identified in the text. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"Cognitive Services Compliance","Privacy notes",information}`.
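For illustration, a minimal call to the single-text overload might look like this, reusing the sample sentence above and assuming the `azure_cognitive` settings are already configured:

```postgresql
-- Returns text[] with the key phrases extracted from the input
SELECT azure_cognitive.extract_key_phrases(
    'For more information, see Cognitive Services Compliance and Privacy notes.',
    'en'
);
```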
## Entity linking
For more information, see Cognitive Services Compliance and Privacy notes at htt
### `azure_cognitive.linked_entities` ```postgresql
-azure_cognitive.linked_entities(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, disable_service_logs boolean DEFAULT false)
+azure_cognitive.linked_entities(text text, language text DEFAULT NULL::text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.linked_entities(text text[], language text DEFAULT NULL::text, batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.linked_entities(text text[], language text[] DEFAULT NULL::text[], batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `batch_size`
+
+`integer DEFAULT 5` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.linked_entities(text text, language text, timeout_ms integer DEF
`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+##### `max_attempts`
+
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for entity linking if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for entity linking, when it fails with any retryable error.
+ For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.linked_entity[]`, a collection of linked entities, where each defines the name, data source entity identifier, language, data source, URL, collection of `azure_cognitive.linked_entity_match` (defining the text and confidence score) and finally a Bing entity search API identifier. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive computing\",\"Cognitive computing\",en,Wikipedia,https://en.wikipedia.org/wiki/Cognitive_computing,\"{\"\"(\\\\\"\"Cognitive Services\\\\\"\",0.78)\
+`azure_cognitive.linked_entity[]` or `TABLE(entities azure_cognitive.linked_entity[])` an array or a single-column table of linked entities, where each defines the name, data source entity identifier, language, data source, URL, collection of `azure_cognitive.linked_entity_match` (defining the text and confidence score) and finally a Bing entity search API identifier. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive computing\",\"Cognitive computing\",en,Wikipedia,https://en.wikipedia.org/wiki/Cognitive_computing,\"{\"\"(\\\\\"\"Cognitive Services\\\\\"\",0.78)\
"\"}\",d73f7d5f-fddb-0908-27b0-74c7db81cd8d)","(\"Regulatory compliance\",\"Regulatory compliance\",en,Wikipedia,https://en.wikipedia.org/wiki/Regulatory_compliance ,\"{\"\"(Compliance,0.28)\"\"}\",89fefaf8-e730-23c4-b519-048f3c73cdbd)","(\"Information privacy\",\"Information privacy\",en,Wikipedia,https://en.wikipedia.org/wiki /Information_privacy,\"{\"\"(Privacy,0)\"\"}\",3d0f2e25-5829-4b93-4057-4a805f0b1043)"}`.
For more information, see Cognitive Services Compliance and Privacy notes at htt
[Named Entity Recognition (NER) feature in Azure AI](../../ai-services/language-service/named-entity-recognition/overview.md) can identify and categorize entities in unstructured text. ```postgresql
-azure_cognitive.recognize_entities(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, disable_service_logs boolean DEFAULT false)
+azure_cognitive.recognize_entities(text text, language text DEFAULT NULL::text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.recognize_entities(text text[], language text DEFAULT NULL::text, batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.recognize_entities(text text[], language text[] DEFAULT NULL::text[], batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `batch_size`
+
+`integer DEFAULT 5` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.recognize_entities(text text, language text, timeout_ms integer
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `disable_service_logs`
+##### `max_attempts`
-`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for entity recognition if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for entity recognition, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.entity[]`, a collection of entities, where each defines the text identifying the entity, category of the entity and confidence score of the match. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive Services\",Skill,\"\",0.94)"}`.
+`azure_cognitive.entity[]` or `TABLE(entities azure_cognitive.entity[])` an array or a single-column table with entities, where each defines the text identifying the entity, category of the entity and confidence score of the match. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive Services\",Skill,\"\",0.94)"}`.
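For illustration, a minimal call to the single-text overload (sample text as above; assumes the `azure_cognitive` settings are already configured):

```postgresql
-- Returns azure_cognitive.entity[] with the recognized entities
SELECT azure_cognitive.recognize_entities(
    'For more information, see Cognitive Services Compliance and Privacy notes.',
    'en'
);
```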
## Personally Identifiable data (PII) detection
For more information, see Cognitive Services Compliance and Privacy notes at htt
### `azure_cognitive.recognize_pii_entities` ```postgresql
-azure_cognitive.recognize_pii_entities(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, domain text DEFAULT 'none'::text, disable_service_logs boolean DEFAULT true)
+azure_cognitive.recognize_pii_entities(text text, language text DEFAULT NULL::text, domain text DEFAULT 'none'::text, disable_service_logs boolean DEFAULT true, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.recognize_pii_entities(text text[], language text DEFAULT NULL::text, domain text DEFAULT 'none'::text, batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT true, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.recognize_pii_entities(text text[], language text[] DEFAULT NULL::text[], domain text DEFAULT 'none'::text, batch_size integer DEFAULT 5, disable_service_logs boolean DEFAULT true, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `domain`
+
+`text DEFAULT 'none'::text`, the personal data domain used for personal data Entity Recognition. Valid values are `none` for no domain specified and `phi` for Personal Health Information.
+
+##### `batch_size`
+
+`integer DEFAULT 5` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT true` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.recognize_pii_entities(text text, language text, timeout_ms inte
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `domain`
+##### `max_attempts`
-`text DEFAULT 'none'::text`, the personal data domain used for personal data Entity Recognition. Valid values are `none` for no domain specified and `phi` for Personal Health Information.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for PII entity recognition if it fails with any retryable error.
-##### `disable_service_logs`
+##### `retry_delay_ms`
-`boolean DEFAULT true` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for PII entity recognition, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.pii_entity_recognition_result`, a result containing the redacted text, and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory, and a score indicating the confidence that the entity correctly matches the identified substring. For example, if invoked with a `text` set to `'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.'`, and `language` set to `'en'`, it could return `("My phone number is ***********, and the address of my office is ************************************.","{""(+1555555555,PhoneNumber,\\""\\"",0.8)"",""(\\""16255 NE 36th Way, Redmond, WA 98052\\"",Address,\\""\\"",1)""}")`.
+`azure_cognitive.pii_entity_recognition_result` or `TABLE(result azure_cognitive.pii_entity_recognition_result)` a single value or a single-column table containing the redacted text, and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory, and a score indicating the confidence that the entity correctly matches the identified substring. For example, if invoked with a `text` set to `'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.'`, and `language` set to `'en'`, it could return `("My phone number is ***********, and the address of my office is ************************************.","{""(+1555555555,PhoneNumber,\\""\\"",0.8)"",""(\\""16255 NE 36th Way, Redmond, WA 98052\\"",Address,\\""\\"",1)""}")`.
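A minimal sketch of the single-text overload, reusing the sample sentence above and assuming the `azure_cognitive` settings are already configured:

```postgresql
-- Returns the redacted text plus the recognized PII entities
SELECT azure_cognitive.recognize_pii_entities(
    'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.',
    'en'
);
```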
## Document summarization
For more information, see Cognitive Services Compliance and Privacy notes at htt
[Document abstractive summarization](../../ai-services/language-service/summarization/overview.md) produces a summary that might not use the same words in the document but yet captures the main idea. ```postgresql
-azure_cognitive.summarize_abstractive(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, sentence_count integer DEFAULT 3, disable_service_logs boolean DEFAULT false)
+azure_cognitive.summarize_abstractive(text text, language text DEFAULT NULL::text, sentence_count integer DEFAULT 3, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.summarize_abstractive(text text[], language text DEFAULT NULL::text, sentence_count integer DEFAULT 3, batch_size integer DEFAULT 25, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.summarize_abstractive(text text[], language text[] DEFAULT NULL::text[], sentence_count integer DEFAULT 3, batch_size integer DEFAULT 25, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
+
+##### `sentence_count`
+
+`integer DEFAULT 3`, maximum number of sentences that the summarization should contain.
+
+##### `batch_size`
+
+`integer DEFAULT 25` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+
+##### `disable_service_logs`
+
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
##### `timeout_ms`
azure_cognitive.summarize_abstractive(text text, language text, timeout_ms integ
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-##### `sentence_count`
+##### `max_attempts`
-`integer DEFAULT 3`, maximum number of sentences that the summarization should contain.
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for abstractive summarization if it fails with any retryable error.
-##### `disable_service_logs`
+##### `retry_delay_ms`
-`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for abstractive summarization, when it fails with any retryable error.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`text[]`, a collection of summaries with each one not exceeding the defined `sentence_count`. For example, if invoked with a `text` set to `'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.'`, and `language` set to `'en'`, it could return `{"PostgreSQL is a database system with advanced features such as atomicity, consistency, isolation, and durability (ACID) properties. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. PostgreSQL was the default database for macOS Server and is available for Linux, BSD, OpenBSD, and Windows."}`.
+`text[]` or `TABLE(summaries text[])` an array or a single-column table of summaries with each one not exceeding the defined `sentence_count`. For example, if invoked with a `text` set to `'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.'`, and `language` set to `'en'`, it could return `{"PostgreSQL is a database system with advanced features such as atomicity, consistency, isolation, and durability (ACID) properties. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. PostgreSQL was the default database for macOS Server and is available for Linux, BSD, OpenBSD, and Windows."}`.
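For illustration, a minimal call to the single-text overload might look like this; the input sentence and `sentence_count` value are illustrative, and the `azure_cognitive` settings are assumed to be already configured:

```postgresql
-- Returns text[] with at most sentence_count summary sentences
SELECT azure_cognitive.summarize_abstractive(
    'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures.',
    'en',
    sentence_count => 2
);
```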
### `azure_cognitive.summarize_extractive` [Document extractive summarization](../../ai-services/language-service/summarization/how-to/document-summarization.md) produces a summary extracting key sentences within the document. ```postgresql
-azure_cognitive.summarize_extractive(text text, language text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, sentence_count integer DEFAULT 3, sort_by text DEFAULT 'offset'::text, disable_service_logs boolean DEFAULT false)
+azure_cognitive.summarize_extractive(text text, language text DEFAULT NULL::text, sentence_count integer DEFAULT 3, sort_by text DEFAULT 'offset'::text, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.summarize_extractive(text text[], language text DEFAULT NULL::text, sentence_count integer DEFAULT 3, sort_by text DEFAULT 'offset'::text, batch_size integer DEFAULT 25, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.summarize_extractive(text text[], language text[] DEFAULT NULL::text[], sentence_count integer DEFAULT 3, sort_by text DEFAULT 'offset'::text, batch_size integer DEFAULT 25, disable_service_logs boolean DEFAULT false, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` #### Arguments ##### `text`
-`text` input to be processed.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `language`
-`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-
-##### `timeout_ms`
-
-`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-
-##### `throw_on_error`
-
-`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) that the input is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
##### `sentence_count`
azure_cognitive.summarize_extractive(text text, language text, timeout_ms intege
`text DEFAULT 'offset'::text`, order of extracted sentences. Valid values are `rank` and `offset`.
+##### `batch_size`
+
+`integer DEFAULT 25` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
+ ##### `disable_service_logs` `boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+##### `timeout_ms`
+
+`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
+
+##### `throw_on_error`
+
+`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
+
+##### `max_attempts`
+
+`integer DEFAULT 1` number of times the extension will retry calling the Azure Language Service endpoint for extractive summarization if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure Language Service endpoint for extractive summarization, when it fails with any retryable error.
+ For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.sentence[]`, a collection of extracted sentences along with their rank score.
+`azure_cognitive.sentence[]` or `TABLE(sentences azure_cognitive.sentence[])` an array or a single-column table of extracted sentences along with their rank score.
For example, if invoked with a `text` set to `'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.'`, and `language` set to `'en'`, it could return `{"(\"PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures.\",0.16)","(\"It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users.\",0)","(\"It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.\",1)"}`. ## Language translation
For example, if invoked with a `text` set to `'PostgreSQL features transactions
### `azure_cognitive.translate` ```postgresql
-azure_cognitive.translate(text text, target_language text, timeout_ms integer DEFAULT NULL, throw_on_error boolean DEFAULT true, source_language text DEFAULT NULL, text_type text DEFAULT 'plain', profanity_action text DEFAULT 'NoAction', profanity_marker text DEFAULT 'Asterisk', suggested_source_language text DEFAULT NULL , source_script text DEFAULT NULL , target_script text DEFAULT NULL)
+azure_cognitive.translate(text text, target_language text, source_language text DEFAULT NULL::text, text_type text DEFAULT 'Plain'::text, profanity_action text DEFAULT 'NoAction'::text, profanity_marker text DEFAULT 'Asterisk'::text, suggested_source_language text DEFAULT NULL::text, source_script text DEFAULT NULL::text, target_script text DEFAULT NULL::text, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.translate(text text, target_language text[], source_language text DEFAULT NULL::text, text_type text DEFAULT 'Plain'::text, profanity_action text DEFAULT 'NoAction'::text, profanity_marker text DEFAULT 'Asterisk'::text, suggested_source_language text DEFAULT NULL::text, source_script text DEFAULT NULL::text, target_script text[] DEFAULT NULL::text[], timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.translate(text text[], target_language text, source_language text DEFAULT NULL::text, text_type text DEFAULT 'Plain'::text, profanity_action text DEFAULT 'NoAction'::text, profanity_marker text DEFAULT 'Asterisk'::text, suggested_source_language text DEFAULT NULL::text, source_script text DEFAULT NULL::text, target_script text DEFAULT NULL::text, batch_size integer DEFAULT 1000, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_cognitive.translate(text text[], target_language text[], source_language text DEFAULT NULL::text, text_type text DEFAULT 'Plain'::text, profanity_action text DEFAULT 'NoAction'::text, profanity_marker text DEFAULT 'Asterisk'::text, suggested_source_language text DEFAULT NULL::text, source_script text DEFAULT NULL::text, target_script text[] DEFAULT NULL::text[], batch_size integer DEFAULT 1000, timeout_ms integer DEFAULT NULL::integer, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
``` > [!NOTE]
For more information on parameters, see [Translator API](../../ai-services/trans
##### `text`
-`text` the input text to be translated
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, with the input to be processed.
##### `target_language`
-`text` two-letter ISO 639-1 representation of the language that you want the input text to be translated to. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-
-##### `timeout_ms`
-
-`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-
-##### `throw_on_error`
-
-`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
+`text` or `text[]` single value or array of values, depending on the overload of the function used, with the two-letter ISO 639-1 representation of the language(s) into which you want the input translated. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
##### `source_language`
For more information on parameters, see [Translator API](../../ai-services/trans
##### `target_script` `text DEFAULT NULL` Specific script of the translated text.
+##### `batch_size`
+
+`integer DEFAULT 1000` number of records to process at a time (only available for the overload of the function for which parameter `text` is of type `text[]`).
+
+##### `timeout_ms`
+
+`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
+
+##### `throw_on_error`
+
+`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
+
+##### `max_attempts`
+
+`integer DEFAULT 1` number of times the extension will retry calling the Azure AI Translator endpoint for text translation if it fails with any retryable error.
+
+##### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure AI Translator endpoint for text translation, when it fails with any retryable error.
++ #### Return type
-`azure_cognitive.translated_text_result`, a json array of translated texts. Details of the response body can be found in the [response body](../../ai-services/translator/reference/v3-0-translate.md#response-body).
+`azure_cognitive.translated_text_result` or `TABLE(result azure_cognitive.translated_text_result)` an array or a single-column table of translated texts. Details of the response body can be found in the [response body](../../ai-services/translator/reference/v3-0-translate.md#response-body).
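As a quick sketch, a single-text, single-target-language call might look like this; the sample text and language code are illustrative, and it assumes the translation endpoint and key have already been configured with `azure_ai.set_setting`:

```postgresql
-- Translate one string into Spanish; returns azure_cognitive.translated_text_result
SELECT azure_cognitive.translate('This is a test sentence.', 'es');
```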
## Examples
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Title: Generate vector embeddings with Azure OpenAI in Azure Database for Postgr
description: Use vector indexes and Azure Open AI embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns. Previously updated : 01/02/2024 Last updated : 04/05/2024
Invoke [Azure OpenAI embeddings](../../ai-services/openai/reference.md#embedding
In the Azure OpenAI resource, under **Resource Management** > **Keys and Endpoints** you can find the endpoint and the keys for your Azure OpenAI resource. To invoke the model deployment, enable the `azure_ai` extension using the endpoint and one of the keys. ```postgresql
-select azure_ai.set_setting('azure_openai.endpoint','https://<endpoint>.openai.azure.com');
+select azure_ai.set_setting('azure_openai.endpoint', 'https://<endpoint>.openai.azure.com');
select azure_ai.set_setting('azure_openai.subscription_key', '<API Key>'); ```
select azure_ai.set_setting('azure_openai.subscription_key', '<API Key>');
Invokes the Azure OpenAI API to create embeddings using the provided deployment over the given input. ```postgresql
-azure_openai.create_embeddings(deployment_name text, input text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true)
+azure_openai.create_embeddings(deployment_name text, input text, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
+azure_openai.create_embeddings(deployment_name text, input text[], batch_size integer DEFAULT 100, timeout_ms integer DEFAULT 3600000, throw_on_error boolean DEFAULT true, max_attempts integer DEFAULT 1, retry_delay_ms integer DEFAULT 1000)
```- ### Arguments #### `deployment_name`
azure_openai.create_embeddings(deployment_name text, input text, timeout_ms inte
#### `input`
-`text` input used to create embeddings.
+`text` or `text[]` single text or array of texts, depending on the overload of the function used, for which embeddings are created.
+
+#### `batch_size`
+
+`integer DEFAULT 100` number of records to process at a time (only available for the overload of the function for which parameter `input` is of type `text[]`).
#### `timeout_ms`
azure_openai.create_embeddings(deployment_name text, input text, timeout_ms inte
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
+#### `max_attempts`
+
+`integer DEFAULT 1` number of times the extension will retry calling the Azure OpenAI endpoint for embedding creation if it fails with any retryable error.
+
+#### `retry_delay_ms`
+
+`integer DEFAULT 1000` amount of time (milliseconds) that the extension will wait, before calling again the Azure OpenAI endpoint for embedding creation, when it fails with any retryable error.
+ ### Return type
-`real[]` a vector representation of the input text when processed by the selected deployment.
+`real[]` or `TABLE(embedding real[])` a single element or a single-column table, depending on the overload of the function used, with vector representations of the input text, when processed by the selected deployment.
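A minimal sketch of the single-text overload, where `<deployment_name>` is a placeholder for your own embedding model deployment and the input sentence is illustrative:

```postgresql
-- Returns real[] containing the embedding vector for the input text
SELECT azure_openai.create_embeddings(
    '<deployment_name>',
    'PostgreSQL is an advanced open-source relational database.'
);
```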
## Use OpenAI to create embeddings and store them in a vector data type
postgresql How To Update Client Certificates Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-update-client-certificates-java.md
+
+ Title: Updating Client SSL/TLS Certificates for Java
+description: Learn about updating Java clients with Flexible Server using SSL and TLS.
++ Last updated : 04/04/2024+++++
+# Update Client TLS Certificates for Application Clients with Azure Database for PostgreSQL - Flexible Server
+++
+## Import Root CA Certificates in Java Key Store on the client for certificate pinning scenarios
+
+Custom-written Java applications use a default keystore, called *cacerts*, which contains trusted certificate authority (CA) certificates. It's also often known as the Java trust store. A certificate file named *cacerts* resides in the security properties directory, java.home\lib\security, where java.home is the runtime environment directory (the jre directory in the SDK or the top-level directory of the Java™ 2 Runtime Environment).
+You can use the following directions to update client root CA certificates for client certificate pinning scenarios with PostgreSQL Flexible Server:
+1. Make a backup copy of your custom keystore.
+2. Download the [certificates](../flexible-server/concepts-networking-ssl-tls.md#downloading-root-ca-certificates-and-updating-application-clients-in-certificate-pinning-scenarios).
+3. Generate a combined CA certificate store that includes both root CA certificates. The following example uses DefaultJavaSSLFactory for PostgreSQL JDBC users.
+
+ * For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona)
+ ```powershell
+
+
+    keytool -importcert -alias PostgreSQLServerCACert -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
+
+    keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt
+ ```
+ * For connectivity to servers deployed in Azure public regions worldwide
+ ```powershell
+
+    keytool -importcert -alias PostgreSQLServerCACert -file D:\DigiCertGlobalRootCA.crt.pem -keystore truststore -storepass password -noprompt
+
+    keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt
+ ```
+
 4. Replace the original keystore file with the newly generated one:
+
+ ```java
+ System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+ System.setProperty("javax.net.ssl.trustStorePassword","password");
+ ```
+5. Replace the original root CA pem file with the combined root CA file and restart your application/client.
+
+For more information on configuring client certificates with the PostgreSQL JDBC driver, see this [documentation](https://jdbc.postgresql.org/documentation/ssl/).
+++
+## Get list of trusted certificates in Java Key Store
+
+As stated above, Java by default stores the trusted certificates in a special file named *cacerts* that is located inside the Java installation folder on the client.
+The following example first reads *cacerts* and loads it into a *KeyStore* object:
+```java
+// Loads the default Java trust store (cacerts) from the java.home security directory.
+private KeyStore loadKeyStore() throws Exception {
+    String relativeCacertsPath = "/lib/security/cacerts".replace("/", File.separator);
+    String filename = System.getProperty("java.home") + relativeCacertsPath;
+    FileInputStream is = new FileInputStream(filename);
+    KeyStore keystore = KeyStore.getInstance(KeyStore.getDefaultType());
+    // "changeit" is the default cacerts password; use your own if it was changed.
+    String password = "changeit";
+    keystore.load(is, password.toCharArray());
+
+ return keystore;
+}
+```
+The default password for *cacerts* is *changeit*, but it might differ on a real client, because administrators recommend changing the password immediately after Java installation.
+Once the *KeyStore* object is loaded, you can use the *PKIXParameters* class to read the certificates present:
+```java
+// Reads the loaded trust store and asserts that it contains at least one trusted certificate.
+public void whenLoadingCacertsKeyStore_thenCertificatesArePresent() throws Exception {
+ KeyStore keyStore = loadKeyStore();
+ PKIXParameters params = new PKIXParameters(keyStore);
+ Set<TrustAnchor> trustAnchors = params.getTrustAnchors();
+ List<Certificate> certificates = trustAnchors.stream()
+ .map(TrustAnchor::getTrustedCert)
+ .collect(Collectors.toList());
+
+ assertFalse(certificates.isEmpty());
+}
+```
+## Update Root CA certificates when using clients in Azure App Services with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+
+For Azure App Service applications connecting to Azure Database for PostgreSQL, there are two possible scenarios for updating client certificates, depending on how you're using SSL with your application deployed to Azure App Service.
+
+* Usually, new certificates are added to App Service at the platform level prior to changes in Azure Database for PostgreSQL - Flexible Server. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. Consult the following [Azure App Service documentation](../../app-service/configure-ssl-certificate.md) for more information.
+* If you're explicitly including the path to the SSL cert file in your code, then you need to download the new cert and update the code to use it. A good example of this scenario is when you use custom containers in App Service, as shown in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress).
+
+ ## Update Root CA certificates when using clients in Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+
+If you're trying to connect to Azure Database for PostgreSQL using applications hosted in Azure Kubernetes Service (AKS) and pinning certificates, it's similar to access from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-tls.md).
+
+## Updating Root CA certificates for .NET (Npgsql) users on Windows with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+
+For .NET (Npgsql) users on Windows connecting to Azure Database for PostgreSQL - Flexible Server instances deployed in Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona), make sure **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 exist in the Windows Certificate Store, under Trusted Root Certification Authorities. If either certificate doesn't exist, import the missing certificate.
+
+For .NET (Npgsql) users on Windows connecting to Azure Database for PostgreSQL - Flexible Server instances deployed in Azure public regions worldwide, make sure **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root CA exist in the Windows Certificate Store, under Trusted Root Certification Authorities. If either certificate doesn't exist, import the missing certificate.
+++
+## Updating Root CA certificates for other clients for certificate pinning scenarios
+
+For other PostgreSQL clients, you can merge the two CA certificate files into a single file using the following format:
+
+```azurecli
++
+-----BEGIN CERTIFICATE-----
+(Root CA1: DigiCertGlobalRootCA.crt.pem)
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+(Root CA2: Microsoft ECC Root Certificate Authority 2017.crt.pem)
+-----END CERTIFICATE-----
+```
+
+## Related content
+
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Public access (allowed IP addresses)** option in [the Azure portal](how-to-manage-firewall-portal.md) or [the Azure CLI](how-to-manage-firewall-cli.md).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. Azure Database
| UAE Central* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | UAE North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
-| US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| US Gov Virginia | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | UK West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 4/4/2024 Last updated : 4/8/2024 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 4/4/2024
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Azure Database for PostgreSQL flexible server.
+## Release: April 2024
+* Support for new [minor versions](./concepts-supported-versions.md) 16.2, 15.6, 14.11, 13.14, 12.18 <sup>$</sup>
+* Support for new [PgBouncer version](./concepts-pgbouncer.md) 1.22.1 <sup>$</sup>
+ ## Release: March 2024 * Public preview of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server. * Public preview of [real-time language translations](generative-ai-azure-cognitive.md#language-translation) with azure_ai extension on Azure Database for PostgreSQL flexible server.
postgresql Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/service-overview.md
Previously updated : 12/20/2023 Last updated : 04/07/2024 adobe-target: true
Azure Database for PostgreSQL flexible server powered by the PostgreSQL communit
### Azure Database for PostgreSQL flexible server
-Azure Database for PostgreSQL flexible server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. Azure Database for PostgreSQL flexible server provides better cost optimization controls with the ability to stop/start server and burstable compute tier, ideal for workloads that donΓÇÖt need full-compute capacity continuously. Azure Database for PostgreSQL flexible server currently supports community version of PostgreSQL 11, 12, 13 and 14, with plans to add newer versions soon. Azure Database for PostgreSQL flexible server is generally available today in a wide variety of [Azure regions](overview.md#azure-regions).
+Azure Database for PostgreSQL flexible server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Azure Database for PostgreSQL flexible server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full-compute capacity continuously. Azure Database for PostgreSQL flexible server currently supports community versions of PostgreSQL 11, 12, 13, 14, 15, and 16, with plans to add newer versions as they become available. Azure Database for PostgreSQL flexible server is generally available today in a wide variety of [Azure regions](overview.md#azure-regions).
-Azure Database for PostgreSQL flexible server instances are best suited for
+Azure Database for PostgreSQL flexible server instances are best suited for:
- Application development requiring better control and customizations
- Cost optimization controls with the ability to stop/start the server (see the CLI sketch below)
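The stop/start control called out above lends itself to automation. The following is a minimal Azure CLI sketch, assuming the `az postgres flexible-server` command group; the resource group and server names are placeholders:

```azurecli
# Stop the server to pause compute billing during idle periods (storage remains provisioned).
az postgres flexible-server stop --resource-group my-resource-group --name my-flexible-server

# Start the server again when the workload resumes.
az postgres flexible-server start --resource-group my-resource-group --name my-flexible-server
```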
private-5g-core Azure Private 5G Core Release Notes 2308 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2308.md
The following table shows the support status for different Packet Core releases.
| Release | Support Status | ||-|
-| AP5GC 2308 | Supported until AP5GC 2401 released |
+| AP5GC 2308 | Supported until AP5GC 2403 released |
| AP5GC 2307 | Supported until AP5GC 2310 released | | AP5GC 2306 and earlier | Out of Support |
private-5g-core Azure Private 5G Core Release Notes 2310 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2310.md
Last updated 11/30/2023
# Azure Private 5G Core 2310 release notes
-The following release notes identify the new features, critical open issues, and resolved issues for the 2308 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as theyΓÇÖre discovered. Before deploying this new version, review the information contained in these release notes.
+The following release notes identify the new features, critical open issues, and resolved issues for the 2310 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
This article applies to the AP5GC 2310 release (2310.0-8). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2309 release and supports the 2023-09-01, 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
The following table shows the support status for different Packet Core releases
| Release | Support Status | ||-|
-| AP5GC 2310 | Supported until AP5GC 2403 is released |
-| AP5GC 2308 | Supported until AP5GC 2401 is released |
+| AP5GC 2310 | Supported until AP5GC 2404 is released |
+| AP5GC 2308 | Supported until AP5GC 2403 is released |
| AP5GC 2307 and earlier | Out of Support | ## What's new
private-5g-core Azure Private 5G Core Release Notes 2403 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2403.md
+
+ Title: Azure Private 5G Core 2403 release notes
+description: Discover what's new in the Azure Private 5G Core 2403 release
++++ Last updated : 04/04/2024++
+# Azure Private 5G Core 2403 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2403 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
+
+This article applies to the AP5GC 2403 release (2403.0-2). This release is compatible with the Azure Stack Edge (ASE) Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2403 release and supports the 2023-09-01, 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+
+For more information about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
+
+For more information about new features in Azure Private 5G Core, see [What's New Guide](whats-new.md).
+
+## Support lifetime
+
+Packet core versions are supported until two subsequent versions are released (unless otherwise noted). You should plan to upgrade your packet core in this time frame to avoid losing support.
+
+### Currently supported packet core versions
+The following table shows the support status for different Packet Core releases and when they're expected to no longer be supported.
+
+| Release | Support Status |
+||-|
+| AP5GC 2403 | Supported until AP5GC 2407 is released |
+| AP5GC 2310 | Supported until AP5GC 2404 is released |
+| AP5GC 2308 and earlier | Out of Support |
+
+## What's new
+
+### TCP Maximum Segment Size (MSS) Clamping
+
+TCP session initial setup messages include a Maximum Segment Size (MSS) value, which controls the size limit of packets transmitted during the session. The packet core now automatically sets this value, where necessary, to ensure packets aren't too large for the core to transmit. This reduces packet loss due to oversized packets arriving at the core's interfaces, and reduces the need for fragmentation and reassembly, which are costly procedures.
+
+## Issues fixed in the AP5GC 2403 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue | SKU Fixed In |
+ |--|--|--|--|-|
+ | 1 | Local distributed tracing | In Multi PDN session establishment/Release call flows with different DNs, the distributed tracing web GUI fails to display some of 4G NAS messages (Activate/deactivate Default EPS Bearer Context Request) and some S1AP messages (ERAB request, ERAB Release). | 2403.0-2 |
+ | 2 | Packet Forwarding | A slight(0.01%) increase in packet drops is observed in latest AP5GC release installed on ASE Platform Pro 2 with ASE-2309 for throughput higher than 3.0 Gbps. | 2403.0-2 |
+
+## Known issues in the AP5GC 2403 release
+<!-- **TO BE UPDATED**
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | | | |
+-->
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
 | 1 | Local distributed tracing | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on, and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that doesn't go via the web proxy, enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password-based authentication to authenticate access to AP5GC Local Dashboards instead. |
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Azure Stack Edge Packet Core Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md
The following table provides information on which versions of the ASE device are
| Packet core version | ASE Pro GPU compatible versions | ASE Pro 2 compatible versions | |--|--|--|
-! 2310 | 2309, 2312 | 2309, 2312 |
+| 2403 | 2403, 2405 | 2403, 2405 |
+| 2310 | 2309, 2312, 2403 | 2309, 2312, 2403 |
| 2308 | 2303, 2309 | 2303, 2309 | | 2307 | 2303 | 2303 | | 2306 | 2303 | 2303 |
private-5g-core Support Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/support-lifetime.md
The following table shows the support status for different Packet Core releases
| Release | Support Status | ||-|
-| AP5GC 2310 | Supported until AP5GC 2403 is released |
-| AP5GC 2308 | Supported until AP5GC 2401 is released |
-| AP5GC 2307 and earlier | Out of Support |
+| AP5GC 2403 | Supported until AP5GC 2407 is released |
+| AP5GC 2310 | Supported until AP5GC 2404 is released |
+| AP5GC 2308 and earlier | Out of Support |
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
Last updated 12/21/2023
To help you stay up to date with the latest developments, this article covers: -- New features, improvements and fixes for the online service.
+- New features, improvements, and fixes for the online service.
- New releases for the packet core, referencing the packet core release notes for further information. This page is updated regularly with the latest developments in Azure Private 5G Core.
+## April 2024
+
+### TCP Maximum Segment Size (MSS) Clamping
+
+**Type:** New feature
+
+**Date available:** April 04, 2024
+
+TCP session initial setup messages include a Maximum Segment Size (MSS) value, which controls the size limit of packets transmitted during the session. The packet core now automatically sets this value, where necessary, to ensure packets aren't too large for the core to transmit. This reduces packet loss due to oversized packets arriving at the core's interfaces, and reduces the need for fragmentation and reassembly, which are costly procedures.
+ ## March 2024+ ### Azure Policy support **Type:** New feature
See [Azure Policy policy definitions for Azure Private 5G Core](azure-policy-ref
**Date available:** March 22, 2024
-The SUPI (subscription permanent identifier) secret needs to be encrypted before being transmitted over the radio network as a SUCI (subscription concealed identifier). The concealment is performed by the UEs on registration, and deconcealment is performed by the packet core. You can now securely manage the required private keys through the Azure Portal and provision SIMs with public keys.
+The SUPI (subscription permanent identifier) secret needs to be encrypted before being transmitted over the radio network as a SUCI (subscription concealed identifier). The concealment is performed by the UEs on registration, and deconcealment is performed by the packet core. You can now securely manage the required private keys through the Azure portal and provision SIMs with public keys.
For more information, see [Enable SUPI concealment](supi-concealment.md).
private-multi-access-edge-compute-mec Affirmed Private Network Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/affirmed-private-network-service-overview.md
- Title: 'What is Affirmed Private Network Service on Azure?'
-description: Learn about Affirmed Private Network Service solutions on Azure for private LTE/5G networks.
---- Previously updated : 06/16/2021---
-# What is Affirmed Private Network Service on Azure?
-
-The Affirmed Private Network Service (APNS) is a managed network service offering created for managed service providers and mobile network operators to provide private LTE and private 5G solutions to enterprises.
-
-Affirmed has combined its mobile core-technology with AzureΓÇÖs capabilities to create a complete turnkey solution for private LTE/5G networks to help carriers and enterprises take advantage of managed networks and the mobile edge. The combination of cloud management and automation allows managed service providers to deliver a fully managed infrastructure and also brings a complete end-to-end solution for operators to pick the best of breed Radio Access Network, SIM, and Azure services from a rich ecosystem of partners offered in Azure Marketplace. The solution is composed of five components:
--- **Cloud-native Mobile Core**: This component is 3GPP standards compliant and supports network functions for both 4G and 5G and has virtual network probes located natively within the mobile core. The mobile core can be deployed on VMs, physical servers, or on an operator's cloud, eliminating the need for dedicated hardware.--- **Private Network Service Manager - Affirmed Networks**: Private Network Service Manager is the application that operators use to deploy, monitor, and manage private mobile core networks on the Azure platform. It features a complete set of management capabilities including simple self-activation and management of private network resources through a programmatic GUI-driven portal.--- **Azure Network Functions Manager**: Azure Network Functions Manager (NFM) is a fully managed cloud-native orchestration service that enables customers to deploy and provision network functions on Azure Stack Edge Pro with GPU for a consistent hybrid experience using the Azure portal.--- **Azure Cloud**: A public cloud computing platform with solutions including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) that can be used for services such as analytics, virtual computing, storage, networking, and much more.--- **Azure Stack Edge**: A cloud-managed, hardware-as-a-service solution shipped by Microsoft. It brings the Azure cloudΓÇÖs power to a local and robust server that can be deployed virtually anywhere local AI and advanced computing tasks need to be performed.---
-## Why use the Affirmed Private Network Solution?
-APNS provides the following key benefits to operators and their customers:
--- **Deployment Flexibility** - APNS employs Control and User Plane Separation technology and supports three types of deployment modes to address a variety of operator desired scenarios for offering to enterprises. By using the Private Network Service Manager, operators can configure the following deployment models:-
- - Standalone enables operators to provide a complete standalone private network on premises by delivering the RAN, 5G core on the Azure Stack Edge and the management layer on the centralized cloud.
-
- - Distributed enables faster processing of data by distributing the user plane closer to the edge of the enterprise on the Azure Stack Edge while the control plane is on the cloud; an example of such a model would be manufacturing facilities.
-
- - All in Cloud allows for the entire 5G core to be deployed on the cloud while the RAN is on the edge, enabling dynamic allocation of cloud resources to suit the changing demands of the workloads.
--- **MNO Integration** - APNS is mobile network operator integrated, which means it provides complete mobility across private and public operator networks with its distributed subscriber core. Operators have the advantage to scale the private mobile network to 1000s of enterprise edge sites.-
- - Supports all Spectrum options - MNO Licensed, Private Licensed, CBRS, Shared, Unlicensed.
-
- - Supports isolated/standalone private networks, multi-site roaming, and macro roaming as it is MNO Integrated.
-
- - Can provide 99.999% service availability and inter-work with any 3GPP compliant LTE and 5G NR radio. Has Carrier-Grade resiliency for enterprises.
--- **Automation and Ease of Management** - The APNS solution can be completely managed remotely through Service Manager on the Azure cloud. Through the Service Manager, end-users have access to their personalized dashboard and can manage, view, and turn on/off devices on the private mobile network. Operators can monitor the status of the networks for network issues and key parameters to ensure optimal performance.-
- - Provides secure, reliable, high bandwidth, low latency private mobile networking service that runs on Azure private multi-access edge compute.
-
- - Supports complete remote management, without needing truck rolls.
-
- - Provides cloud automation to enable operators to offer managed services to enterprises or to partner with MSPs who in turn can offer managed services.
--- **Smarter Network & Business Insights** - Affirmed mobile core has an embedded virtual probe/ packet brokering function that can be used to provide network insight. The operator can use these insights to better drive network decisions while their customers can use these insights to drive smarter monetization decisions.--- **Data Privacy & Security** - APNS uses Azure to deliver security and compliance across private networks and enterprise applications. Operators can confidently deploy the solution for industry use cases that require stringent data privacy laws, such as healthcare, government, public safety, and defense.-
-## Next steps
-- Learn how to [deploy the Affirmed private Network Service solution](deploy-affirmed-private-network-service-solution.md)---
private-multi-access-edge-compute-mec Deploy Affirmed Private Network Service Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/deploy-affirmed-private-network-service-solution.md
- Title: 'Deploy Affirmed Private Network Service on Azure'
-description: Learn how to deploy the Affirmed Private Network Service solution on Azure
---- Previously updated : 06/16/2021--
-# Deploy Affirmed Private Network Service on Azure
-
-This article provides a high-level overview of the process of deploying Affirmed Private Network Service (APNS) solution on an Azure Stack Edge device via the Microsoft Azure Marketplace.
-
-The following diagram shows the system architecture of the Affirmed Private Network Service, including the resources required to deploy.
-
-![Affirmed Private Network Service deployment](media/deploy-affirmed-private-network-service/deploy-affirmed-private-network-service.png)
-
-## Collect required information
-
-To deploy APNS, you must have the following resources:
--- A configured Azure Network Function Manager - Device object which serves as the digital twin of the Azure Stack Edge device. --- A fully deployed Azure Stack Edge with NetFoundry VM. --- Subscription approval for the Affirmed Management Systems VM Offer and APNS Managed Application. --- An Azure account with an active subscription and access to the following: -
- - The built-in **Owner** Role for your resource group.
-
- - The built-in **Managed Application Contributor** role for your subscription.
-
- - A virtual network and subnet to join (open ports tcp/443 and tcp/8443).
-
- - 5 IP addresses on the virtual subnet.
-
- - A valid SAS Token provided by Affirmed Release Engineering.
-
- - An administrative username/password to program during the deployment.
-
-## Deploy APNS
-
-To automatically deploy the APNS Managed application with all required resources and relevant information necessary, select the APNS Managed Application from the Microsoft Azure Marketplace. When you deploy APNS, all the required resources are automatically created for you and are contained in a Managed Resource Group.
-
-Complete the following procedure to deploy APNS:
-1. Open the Azure portal and select **Create a resource**.
-2. Enter *APNS* in the search bar and press Enter.
-3. Select **View Private Offers**.
- > [!NOTE]
- > The APNS Managed application will not appear until **View Private Offers** is selected.
-4. Select **Create** from the dropdown menu of the **Private Offer**, then select the option to deploy.
-5. Complete the application setup, network settings, and review and create.
-6. Select **Deploy**.
-
-## Next steps
--- For information about Affirmed Private Network Service, see [What is Affirmed Private Network Service on Azure?](affirmed-private-network-service-overview.md).
remote-rendering Graphics Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/graphics-bindings.md
StartupRemoteRendering(managerInit); // static function in namespace Microsoft::
``` The call above must be called before any other Remote Rendering APIs are accessed.
-Similarly, the corresponding de-init function `RemoteManagerStatic.ShutdownRemoteRendering();` should be called after all other Remote Rendering objects are already destoyed.
+Similarly, the corresponding de-init function `RemoteManagerStatic.ShutdownRemoteRendering();` should be called after all other Remote Rendering objects are already destroyed.
For WMR `StartupRemoteRendering` also needs to be called before any holographic API is called. For OpenXR the same applies for any OpenXR related APIs. ## <span id="access">Accessing graphics binding
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | <a name='reservations-administrator'></a>[Reservations Administrator](./built-in-roles/management-and-governance.md#reservations-administrator) | Lets one read and manage all the reservations in a tenant | a8889054-8d42-49c9-bc1c-52486c10e7cd | > | <a name='reservations-reader'></a>[Reservations Reader](./built-in-roles/management-and-governance.md#reservations-reader) | Lets one read all the reservations in a tenant | 582fc458-8989-419f-a480-75249bc5db7e | > | <a name='resource-policy-contributor'></a>[Resource Policy Contributor](./built-in-roles/management-and-governance.md#resource-policy-contributor) | Users with rights to create/modify resource policy, create support ticket and read resources/hierarchy. | 36243c78-bf99-498c-9df9-86d9f8d28608 |
+> | <a name='scheduled-patching-contributor'></a>[Scheduled Patching Contributor](./built-in-roles/management-and-governance.md#scheduled-patching-contributor) | Provides access to manage maintenance configurations with maintenance scope InGuestPatch and corresponding configuration assignments | cd08ab90-6b14-449c-ad9a-8f8e549482c6 |
> | <a name='site-recovery-contributor'></a>[Site Recovery Contributor](./built-in-roles/management-and-governance.md#site-recovery-contributor) | Lets you manage Site Recovery service except vault creation and role assignment | 6670b86e-a3f7-4917-ac9b-5d6ab1be4567 | > | <a name='site-recovery-operator'></a>[Site Recovery Operator](./built-in-roles/management-and-governance.md#site-recovery-operator) | Lets you failover and failback but not perform other Site Recovery management operations | 494ae006-db33-4328-bf46-533a6560a3ca | > | <a name='site-recovery-reader'></a>[Site Recovery Reader](./built-in-roles/management-and-governance.md#site-recovery-reader) | Lets you view Site Recovery status but not perform other management operations | dbaa88c4-0c30-4179-9fb3-46319faa6149 |
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/management-and-governance.md
Users with rights to create/modify resource policy, create support ticket and re
} ```
+## Scheduled Patching Contributor
+
+Provides access to manage maintenance configurations with maintenance scope InGuestPatch and corresponding configuration assignments
+
+[Learn more](/azure/update-manager/scheduled-patching)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/read | Read maintenance configuration. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/write | Create or update maintenance configuration. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/delete | Delete maintenance configuration. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/read | Read maintenance configuration assignment. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/write | Create or update maintenance configuration assignment. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/delete | Delete maintenance configuration assignment. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/maintenanceScope/InGuestPatch/read | Read maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/maintenanceScope/InGuestPatch/write | Create or update a maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/configurationAssignments/maintenanceScope/InGuestPatch/delete | Delete maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/maintenanceScope/InGuestPatch/read | Read maintenance configuration for InGuestPatch maintenance scope. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/maintenanceScope/InGuestPatch/write | Create or update a maintenance configuration for InGuestPatch maintenance scope. |
+> | [Microsoft.Maintenance](../permissions/management-and-governance.md#microsoftmaintenance)/maintenanceConfigurations/maintenanceScope/InGuestPatch/delete | Delete maintenance configuration for InGuestPatch maintenance scope. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Provides access to manage maintenance configurations with maintenance scope InGuestPatch and corresponding configuration assignments",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/cd08ab90-6b14-449c-ad9a-8f8e549482c6",
+ "name": "cd08ab90-6b14-449c-ad9a-8f8e549482c6",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Maintenance/maintenanceConfigurations/read",
+ "Microsoft.Maintenance/maintenanceConfigurations/write",
+ "Microsoft.Maintenance/maintenanceConfigurations/delete",
+ "Microsoft.Maintenance/configurationAssignments/read",
+ "Microsoft.Maintenance/configurationAssignments/write",
+ "Microsoft.Maintenance/configurationAssignments/delete",
+ "Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/read",
+ "Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/write",
+ "Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/delete",
+ "Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/read",
+ "Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/write",
+ "Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/delete"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Scheduled Patching Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
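As a usage sketch only (the principal, subscription ID, and scope below are placeholders, not values taken from the role definition), assigning this built-in role with the Azure CLI could look like this:

```azurecli
# Grant the Scheduled Patching Contributor role at subscription scope.
az role assignment create \
  --assignee "patch-operator@contoso.com" \
  --role "Scheduled Patching Contributor" \
  --scope "/subscriptions/<subscription-id>"
```

Scoping the assignment to the resource group that holds the maintenance configurations, rather than the whole subscription, is usually the better least-privilege choice.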
+ ## Site Recovery Contributor Lets you manage Site Recovery service except vault creation and role assignment
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/management-and-governance.md
Azure service: Microsoft Monitoring Insights
> | Microsoft.Intune/diagnosticsettings/delete | Deleting a diagnostic setting | > | Microsoft.Intune/diagnosticsettingscategories/read | Reading a diagnostic setting categories |
+## Microsoft.Maintenance
+
+Azure service: [Azure Maintenance](/azure/virtual-machines/maintenance-configurations), [Azure Update Manager](/azure/update-manager/overview)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.Maintenance/applyUpdates/write | Write apply updates to a resource. |
+> | Microsoft.Maintenance/applyUpdates/read | Read apply updates to a resource. |
+> | Microsoft.Maintenance/configurationAssignments/write | Create or update maintenance configuration assignment. |
+> | Microsoft.Maintenance/configurationAssignments/read | Read maintenance configuration assignment. |
+> | Microsoft.Maintenance/configurationAssignments/delete | Delete maintenance configuration assignment. |
+> | Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/write | Create or update a maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/read | Read maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/configurationAssignments/maintenanceScope/InGuestPatch/delete | Delete maintenance configuration assignment for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/maintenanceConfigurations/write | Create or update maintenance configuration. |
+> | Microsoft.Maintenance/maintenanceConfigurations/read | Read maintenance configuration. |
+> | Microsoft.Maintenance/maintenanceConfigurations/delete | Delete maintenance configuration. |
+> | Microsoft.Maintenance/maintenanceConfigurations/eventGridFilters/delete | Notifies Microsoft.Maintenance that an EventGrid Subscription for Maintenance Configuration is being deleted. |
+> | Microsoft.Maintenance/maintenanceConfigurations/eventGridFilters/read | Notifies Microsoft.Maintenance that an EventGrid Subscription for Maintenance Configuration is being viewed. |
+> | Microsoft.Maintenance/maintenanceConfigurations/eventGridFilters/write | Notifies Microsoft.Maintenance that a new EventGrid Subscription for Maintenance Configuration is being created. |
+> | Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/write | Create or update a maintenance configuration for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/read | Read maintenance configuration for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/maintenanceConfigurations/maintenanceScope/InGuestPatch/delete | Delete maintenance configuration for InGuestPatch maintenance scope. |
+> | Microsoft.Maintenance/updates/read | Read updates to a resource. |
+ ## Microsoft.ManagedServices Azure service: [Azure Lighthouse](/azure/lighthouse/)
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Click the resource provider name in the following list to see the list of permis
> | [Microsoft.Features](./permissions/management-and-governance.md#microsoftfeatures) | | [Azure Resource Manager](/azure/azure-resource-manager/) | > | [Microsoft.GuestConfiguration](./permissions/management-and-governance.md#microsoftguestconfiguration) | Audit settings inside a machine using Azure Policy. | [Azure Policy](/azure/governance/policy/) | > | [Microsoft.Intune](./permissions/management-and-governance.md#microsoftintune) | Enable your workforce to be productive on all their devices, while keeping your organization's information protected. | |
+> | [Microsoft.Maintenance](./permissions/management-and-governance.md#microsoftmaintenance) | | [Azure Maintenance](/azure/virtual-machines/maintenance-configurations)<br/>[Azure Update Manager](/azure/update-manager/overview) |
> | [Microsoft.ManagedServices](./permissions/management-and-governance.md#microsoftmanagedservices) | | [Azure Lighthouse](/azure/lighthouse/) | > | [Microsoft.Management](./permissions/management-and-governance.md#microsoftmanagement) | Use management groups to efficiently apply governance controls and manage groups of Azure subscriptions. | [Management Groups](/azure/governance/management-groups/) | > | [Microsoft.PolicyInsights](./permissions/management-and-governance.md#microsoftpolicyinsights) | Summarize policy states for the subscription level policy definition. | [Azure Policy](/azure/governance/policy/) |
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
Previously updated : 01/02/2024 Last updated : 04/07/2024
Several Azure resources have a dependency on a subscription or a directory. Depe
> This section lists the known Azure services or resources that depend on your subscription. Because resource types in Azure are constantly evolving, there might be additional dependencies not listed here that can cause a breaking change to your environment. | Service or resource | Impacted | Recoverable | Are you impacted? | What you can do |
-| | | | | |
+| | :: | :: | | |
| Role assignments | Yes | Yes | [List role assignments](#save-all-role-assignments) | All role assignments are permanently deleted. You must map users, groups, and service principals to corresponding objects in the target directory. You must re-create the role assignments. | | Custom roles | Yes | Yes | [List custom roles](#save-custom-roles) | All custom roles are permanently deleted. You must re-create the custom roles and any role assignments. | | System-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must disable and re-enable the managed identities. You must re-create the role assignments. |
Several Azure resources have a dependency on a subscription or a directory. Depe
| Azure Service Fabric | Yes | No | | You must re-create the cluster. For more information, see [SF Clusters FAQ](../service-fabric/service-fabric-common-questions.md) or [SF Managed Clusters FAQ](../service-fabric/faq-managed-cluster.yml) | | Azure Service Bus | Yes | Yes | |You must delete, re-create, and attach the managed identities to the appropriate resource. You must re-create the role assignments. | | Azure Synapse Analytics Workspace | Yes | Yes | | You must update the tenant ID associated with the Synapse Analytics Workspace. If the workspace is associated with a Git repository, you must update the [workspace's Git configuration](../synapse-analytics/cicd/source-control.md#switch-to-a-different-git-repository). For more information, see [Recovering Synapse Analytics workspace after transferring a subscription to a different Microsoft Entra directory (tenant)](../synapse-analytics/how-to-recover-workspace-after-tenant-move.md). |
+| Azure Databricks | Yes | No | | Currently, Azure Databricks does not support moving workspaces to a new tenant. For more information, see [Manage your Azure Databricks account](/azure/databricks/administration-guide/account-settings/#move-workspace-between-tenants-unsupported). |
> [!WARNING] > If you are using encryption at rest for a resource, such as a storage account or SQL database, that has a dependency on a key vault that is being transferred, it can lead to an unrecoverable scenario. If you have this situation, you should take steps to use a different key vault or temporarily disable customer-managed keys to avoid this unrecoverable scenario.
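For the role-assignment backup step listed in the table above, a hedged Azure CLI sketch of capturing the assignments before the transfer (the output file name is a placeholder):

```azurecli
# Save all role assignments in the current subscription so they can be re-created
# against the corresponding objects in the target directory after the move.
az role assignment list --all --include-inherited --output json > role-assignments-backup.json
```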
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
The **Optical character recognition (OCR)** skill recognizes printed and handwri
An OCR skill uses the machine learning models provided by [Azure AI Vision](../ai-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Azure AI services. The **OCR** skill maps to the following functionality: + For the languages listed under [Azure AI Vision language support](../ai-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../ai-services/computer-vision/overview-ocr.md) is used.
-+ For Greek and Serbian Cyrillic, the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used.
+++ For Greek and Serbian Cyrillic, the legacy [OCR in version 3.2](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/cognitiveservices/data-plane/ComputerVision/stable/v3.2) API is used. The **OCR** skill extracts text from image files. Supported file formats include:
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. </p>This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
+| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. </p>This parameter only applies if the [legacy OCR version 3.2](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/cognitiveservices/data-plane/ComputerVision/stable/v3.2) API is used. |
| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#analyze-image) of Azure AI Vision. You can also specify `unk` (Unknown). </p>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.| | `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". |
The above skillset example assumes that a normalized-images field exists. To gen
} ``` -- ## See also + [What is optical character recognition](../ai-services/computer-vision/overview-ocr.md)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
- ignite-2023 Previously updated : 02/22/2024 Last updated : 04/03/2024 # Make outbound connections through a shared private link
Shared private link is a premium feature that's billed by usage. When you set up
Azure AI Search makes outbound calls to other Azure PaaS resources in the following scenarios:
-+ Indexer connection requests to supported data sources
-+ Indexer (skillset) connections to Azure Storage for caching enrichments or writing to a knowledge store
++ Indexer or search engine connects to Azure OpenAI for text-to-vector embeddings++ Indexer connects to supported data sources++ Indexer (skillset) connects to Azure Storage for caching enrichments, debug session state, or writing to a knowledge store + Encryption key requests to Azure Key Vault + Custom skill requests to Azure Functions or similar resource
-In service-to-service communications, Azure AI Search typically sends a request over a public internet connection. However, if your data, key vault, or function should be accessed through a [private endpoint](../private-link/private-endpoint-overview.md), you must create a *shared private link*.
+Shared private links only work for Azure-to-Azure connections. If you're connecting to OpenAI or another external model, the connection must be over the public internet.
+
+Shared private links are for operations and data accessed through a [private endpoint](../private-link/private-endpoint-overview.md) for Azure resources or clients that run in an Azure virtual network.
A shared private link is:
There are two scenarios for using [Azure Private Link](../private-link/private-l
+ Scenario two: [configure search for a private *inbound* connection](service-create-private-endpoint.md) from clients that run in a virtual network.
+Scenario one is covered in this article.
+ While both scenarios have a dependency on Azure Private Link, they are independent. You can create a shared private link without having to configure your own search service for a private endpoint. ### Limitations When evaluating shared private links for your scenario, remember these constraints.
-+ Several of the resource types used in a shared private link are in preview. If you're connecting to a preview resource (Azure Database for MySQL, Azure Functions, or Azure SQL Managed Instance), use a preview version of the Management REST API to create the shared private link. These versions include `2020-08-01-preview` or `2021-04-01-preview`.
++ Several of the resource types used in a shared private link are in preview. If you're connecting to a preview resource (Azure Database for MySQL, Azure Functions, or Azure SQL Managed Instance), use a preview version of the Management REST API to create the shared private link. These versions include `2020-08-01-preview`, `2021-04-01-preview`, and `2024-03-01-preview`. + Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multitenant environment. The configuration setting for this requirement is covered in this article.
When evaluating shared private links for your scenario, remember these constrain
+ An Azure AI Search at the Basic tier or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, the tier must be Standard 2 (S2) or higher. See [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits) for details.
-+ An Azure PaaS resource from the following list of supported resource types, configured to run in a virtual network.
++ An Azure PaaS resource from the following list of [supported resource types](#supported-resource-types), configured to run in a virtual network.+ + Permissions on both Azure AI Search and the data source:
A `202 Accepted` response is returned on success. The process of creating an out
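For generally available resource types such as Azure Storage, the same request can also be issued with the Azure CLI. A hedged sketch, assuming the `az search shared-private-link-resource` command group; all names and IDs are placeholders:

```azurecli
# Create a shared private link from the search service to a storage account (blob sub-resource).
az search shared-private-link-resource create \
  --service-name my-search-service \
  --resource-group my-resource-group \
  --name blob-shared-private-link \
  --group-id blob \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --request-message "Requested by Azure AI Search indexers"
```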
## 2 - Approve the private endpoint connection
-Approval of the private endpoint connection is granted on the Azure PaaS side. If the service consumer has a role assignment on the service provider resource, the approval will be automatic. Otherwise, manual approval is required. For details, see [Manage Azure private endpoints](/azure/private-link/manage-private-endpoint).
+Approval of the private endpoint connection is granted on the Azure PaaS side. Explicit approval by the resource owner is required. The following steps cover approval using the Azure portal, but here are some links to approve the connection programmatically from the Azure PaaS side:
+++ On Azure Storage, use [Private Endpoint Connections - Put](/rest/api/storagerp/private-endpoint-connections/put)++ On Azure Cosmos DB, use [Private Endpoint Connections - Create Or Update](/rest/api/cosmos-db-resource-provider/private-endpoint-connections/create-or-update)
-This section assumes manual approval and the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2023-03-15/private-endpoint-connections) are two examples.
+Using the Azure portal, perform the following steps:
-1. In the Azure portal, open the **Networking** page of the Azure PaaS resource.[text](https://ms.portal.azure.com/#blade%2FHubsExtension%2FResourceMenuBlade%2Fid%2F%2Fsubscriptions%2Fa5b1ca8b-bab3-4c26-aebe-4cf7ec4791a0%2FresourceGroups%2Ftest-private-endpoint%2Fproviders%2FMicrosoft.Network%2FprivateEndpoints%2Ftest-private-endpoint)
+1. Open the **Networking** page of the Azure PaaS resource.
1. Find the section that lists the private endpoint connections. The following example is for a storage account.
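If you prefer scripting the approval from the Azure PaaS side, the generic private endpoint connection commands in the Azure CLI are one option. A hedged sketch with placeholder values, using a storage account as the target resource:

```azurecli
# List the pending private endpoint connections on the storage account, then approve one by its ID.
az network private-endpoint-connection list \
  --name <storage-account-name> \
  --resource-group <resource-group> \
  --type Microsoft.Storage/storageAccounts

az network private-endpoint-connection approve \
  --id <private-endpoint-connection-id> \
  --description "Approved for Azure AI Search shared private link"
```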
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
- devx-track-azurecli - ignite-2023 Previously updated : 02/21/2024 Last updated : 04/05/2024 # Manage your Azure AI Search service with the Azure CLI
Last updated 02/21/2024
> * [Azure CLI](search-manage-azure-cli.md) > * [REST API](search-manage-rest.md)
-You can run Azure CLI commands and scripts on Windows, macOS, Linux, or in [Azure Cloud Shell](../cloud-shell/overview.md) to create and configure Azure AI Search. The [**az search**](/cli/azure/search) module extends the [Azure CLI](/cli/) with full parity to the [Search Management REST APIs](/rest/api/searchmanagement) and the ability to perform the following tasks:
+You can run Azure CLI commands and scripts on Windows, macOS, Linux, or in Azure Cloud Shell to create and configure Azure AI Search.
+
+Use the [**az search module**](/cli/azure/search) to perform the following tasks:
> [!div class="checklist"]
-> * [List search services in a subscription](#list-search-services)
+> * [List search services in a subscription](#list-services-in-a-subscription)
> * [Return service information](#get-search-service-information) > * [Create or delete a service](#create-or-delete-a-service) > * [Create a service with a private endpoint](#create-a-service-with-a-private-endpoint)
Preview administration features are typically not available in the **az search**
Azure CLI versions are [listed on GitHub](https://github.com/Azure/azure-cli/releases).
-<a name="list-search-services"></a>
+The [**az search**](/cli/azure/search) module extends the [Azure CLI](/cli/) with full parity to the stable versions of the [Search Management REST APIs](/rest/api/searchmanagement).
## List services in a subscription
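As an illustrative sketch of this task (not the article's own example), one way to enumerate the search services in the current subscription is to query the resource type directly:

```azurecli
# List every Azure AI Search service in the subscription.
az resource list --resource-type Microsoft.Search/searchServices --output table
```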
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
Title: PowerShell scripts using `Az.Search` module
+ Title: PowerShell scripts using Azure Search PowerShell module
description: Create and configure an Azure AI Search service with PowerShell. You can scale a service up or down, manage admin and query api-keys, and query for system information.
ms.devlang: powershell Previously updated : 02/21/2024 Last updated : 04/05/2024 - devx-track-azurepowershell - ignite-2023
> * [Azure CLI](search-manage-azure-cli.md) > * [REST API](search-manage-rest.md)
-You can run PowerShell cmdlets and scripts on Windows, Linux, or in [Azure Cloud Shell](../cloud-shell/overview.md) to create and configure Azure AI Search. The **Az.Search** module extends [Azure PowerShell](/powershell/) with full parity to the [Search Management REST APIs](/rest/api/searchmanagement) and the ability to perform the following tasks:
+You can run PowerShell cmdlets and scripts on Windows, Linux, or in Azure Cloud Shell to create and configure Azure AI Search.
+
+Use the [**Az.Search** module](/powershell/module/az.search/) to perform the following tasks:
> [!div class="checklist"] > * [List search services in a subscription](#list-search-services)
You can't use tools or APIs to transfer content, such as an index, from one serv
Preview administration features are typically not available in the **Az.Search** module. If you want to use a preview feature, [use the Management REST API](search-manage-rest.md) and a preview API version.
+The [**Az.Search** module](/powershell/module/az.search/) extends [Azure PowerShell](/powershell/) with full parity to the stable versions of the [Search Management REST APIs](/rest/api/searchmanagement).
+ <a name="check-versions-and-load"></a> ## Check versions and load modules
search Service Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-create-private-endpoint.md
- ignite-2023 Previously updated : 01/10/2024 Last updated : 04/03/2024 # Create a Private Endpoint for a secure connection to Azure AI Search
-In this article, learn how to secure an Azure AI Search service so that it can't be accessed over a public internet connection:
+In this article, learn how to configure a private connection to Azure AI Search so that it admits requests from clients in a virtual network instead of over a public internet connection:
+ [Create an Azure virtual network](#create-the-virtual-network) (or use an existing one) + [Configure a search service to use a private endpoint](#create-a-search-service-with-a-private-endpoint) + [Create an Azure virtual machine in the same virtual network](#create-a-virtual-machine) + [Test using a browser session on the virtual machine](#connect-to-the-vm)
+Other Azure resources that might privately connect to Azure AI Search include Azure OpenAI for "use your own data" scenarios. Azure OpenAI Studio doesn't run in a virtual network, but it can be configured on the backend to send requests over the Microsoft backbone network. Configuration for this traffic pattern is enabled by Microsoft when your request is submitted and approved. For this scenario:
+++ Follow the instructions in this article to set up the private endpoint.++ [Submit a request](/azure/ai-services/openai/how-to/use-your-data-securely#disable-public-network-access-1) for Azure OpenAI Studio to connect using your private endpoint.++ Optionally, [disable public network access](#disable-public-network-access) if connections should only originate from clients in the virtual network or from Azure OpenAI over a private endpoint connection.+
+## Key points about private endpoints
+ Private endpoints are provided by [Azure Private Link](../private-link/private-link-overview.md), as a separate billable service. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/).
-You can create a private endpoint for a search service in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
+Once a search service has a private endpoint, portal access to that service must be initiated from a browser session on a virtual machine inside the virtual network. See [this step](#portal-access-private-search-service) for details.
-> [!NOTE]
-> Once a search service has a private endpoint, portal access to that service must be initiated from a browser session on a virtual machine inside the virtual network. See [this step](#portal-access-private-search-service) for details.
+You can create a private endpoint for a search service in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
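A hedged Azure CLI sketch of the alternative mentioned above. The virtual network, subnet, and service names are placeholders, and `searchService` is assumed to be the group ID (sub-resource) for Azure AI Search:

```azurecli
# Create a private endpoint in an existing virtual network that targets the search service.
az network private-endpoint create \
  --name my-search-private-endpoint \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Search/searchServices/<search-service>" \
  --group-id searchService \
  --connection-name my-search-connection
```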
-## Why use a Private Endpoint for secure access?
+## Why use a private endpoint?
[Private Endpoints](../private-link/private-endpoint-overview.md) for Azure AI Search allow a client on a virtual network to securely access data in a search index over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the [virtual network address space](../virtual-network/ip-services/private-ip-addresses.md) for your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. For a list of other PaaS services that support Private Link, check the [availability section](../private-link/private-link-overview.md#availability) in the product documentation.
To work around this restriction, connect to Azure portal from a browser on a vir
1. On a virtual machine in your virtual network, open a browser and sign in to the Azure portal. The portal will use the private endpoint attached to the virtual machine to connect to your search service.
+## Disable public network access
+
+You can lock down a search service to prevent it from admitting any request from the public internet. You can use the Azure portal for this step.
+
+1. In the Azure portal, on the leftmost pane of your search service page, select **Networking**.
+
+1. Select **Disabled** on the **Firewalls and virtual networks** tab.
+
+You can also use the [Azure CLI](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update&preserve-view=true), [Azure PowerShell](/powershell/module/az.search/set-azsearchservice), or the [Management REST API](/rest/api/searchmanagement/services/update), setting `public-access` or `public-network-access` to `disabled`.
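A hedged sketch of the CLI path, assuming the `--public-access` parameter of `az search service update` (the exact flag name can vary by tool and CLI version, as the sentence above notes); the service and resource group names are placeholders:

```azurecli
# Refuse all requests from the public internet; only private endpoint traffic is admitted.
az search service update \
  --name my-search-service \
  --resource-group my-resource-group \
  --public-access disabled
```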
+ ## Clean up resources When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money.
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
MSAL also maintains a token cache and refreshes tokens for you when they're clos
| **SDL Phase** | Build | | **Applicable Technologies** | Generic, C#, Node.JS, | | **Attributes** | N/A, Gateway choice - Azure IoT Hub |
-| **References** | N/A, [Azure IoT hub with .NET](../../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp), [Getting Started with IoT hub and Node JS](../../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs), [Securing IoT with SAS and certificates](../../iot-hub/iot-hub-dev-guide-sas.md), [Git repository](https://github.com/Azure/azure-iot-sdks/) |
+| **References** | N/A, [Azure IoT hub with .NET](../../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-csharp), [Getting Started with IoT hub and Node JS](../../iot/tutorial-send-telemetry-iot-hub.md?pivots=programming-language-nodejs), [Securing IoT with SAS and certificates](../../iot-hub/iot-hub-dev-guide-sas.md), [Git repository](https://github.com/Azure/azure-iot-sdks/) |
| **Steps** | <ul><li>**Generic:** Authenticate the device using Transport Layer Security (TLS) or IPSec. Infrastructure should support using pre-shared key (PSK) on those devices that cannot handle full asymmetric cryptography. Leverage Microsoft Entra ID, Oauth.</li><li>**C#:** When creating a DeviceClient instance, by default, the Create method creates a DeviceClient instance that uses the AMQP protocol to communicate with IoT Hub. To use the HTTPS protocol, use the override of the Create method that enables you to specify the protocol. If you use the HTTPS protocol, you should also add the `Microsoft.AspNet.WebApi.Client` NuGet package to your project to include the `System.Net.Http.Formatting` namespace.</li></ul>| ### Example
security Threat Modeling Tool Releases 73209279 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73209279.md
Title: Microsoft Threat Modeling Tool release 09/27/2022 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.20927.9.--++ Last updated 09/27/2022
security Threat Modeling Tool Releases 73211082 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73211082.md
Title: Microsoft Threat Modeling Tool release 11/08/2022 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.21108.2.--++ Last updated 11/08/2022
security Threat Modeling Tool Releases 73306305 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73306305.md
Title: Microsoft Threat Modeling Tool release 06/30/2023 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.30630.5.--++ Last updated 06/30/2023
security Threat Modeling Tool Releases 73308291 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73308291.md
Title: Microsoft Threat Modeling Tool release 08/30/2023 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.30829.1.--++ Last updated 08/30/2023
security Threat Modeling Tool Releases 73309251 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73309251.md
Title: Microsoft Threat Modeling Tool release 09/25/2023 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.30925.1.--++ Last updated 09/25/2023
security Threat Modeling Tool Releases 73310263 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73310263.md
Title: Microsoft Threat Modeling Tool release 10/26/2023 - Azure description: Documenting the release notes for the threat modeling tool release 7.3.31026.3.--++ Last updated 10/26/2023
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Data connectors are available as part of the following offerings:
## Amazon Web Services - [Amazon Web Services](data-connectors/amazon-web-services.md)-- [Amazon Web Services S3 (preview)](data-connectors/amazon-web-services-s3.md)
+- [Amazon Web Services S3](data-connectors/amazon-web-services-s3.md)
## Apache
Data connectors are available as part of the following offerings:
- [Threat intelligence - TAXII](data-connectors/threat-intelligence-taxii.md) - [Threat Intelligence Platforms](data-connectors/threat-intelligence-platforms.md) - [Threat Intelligence Upload Indicators API (Preview)](data-connectors/threat-intelligence-upload-indicators-api.md)-- [Windows DNS Events via AMA (Preview)](data-connectors/windows-dns-events-via-ama.md)
+- [Windows DNS Events via AMA](data-connectors/windows-dns-events-via-ama.md)
- [Windows Firewall](data-connectors/windows-firewall.md) - [Windows Forwarded Events](data-connectors/windows-forwarded-events.md) - [Windows Security Events via AMA](data-connectors/windows-security-events-via-ama.md)
sentinel Amazon Web Services S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/amazon-web-services-s3.md
Title: "Amazon Web Services S3 connector for Microsoft Sentinel (preview)"
+ Title: "Amazon Web Services S3 connector for Microsoft Sentinel"
description: "Learn how to install the connector Amazon Web Services S3 to connect your data source to Microsoft Sentinel."
-# Amazon Web Services S3 connector for Microsoft Sentinel (preview)
+# Amazon Web Services S3 connector for Microsoft Sentinel
This connector allows you to ingest AWS service logs, collected in AWS S3 buckets, to Microsoft Sentinel. The currently supported data types are: * AWS CloudTrail
sentinel Windows Dns Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/windows-dns-events-via-ama.md
Title: "Windows DNS Events via AMA (Preview) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Windows DNS Events via AMA (Preview) to connect your data source to Microsoft Sentinel."
+ Title: "Windows DNS Events via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector Windows DNS Events via AMA to connect your data source to Microsoft Sentinel."
Previously updated : 02/28/2023 Last updated : 04/04/2024
-# Windows DNS Events via AMA (Preview) connector for Microsoft Sentinel
+# Windows DNS Events via AMA connector for Microsoft Sentinel
The Windows DNS log connector allows you to easily filter and stream all analytics logs from your Windows DNS servers to your Microsoft Sentinel workspace using the Azure Monitoring agent (AMA). Having this data in Microsoft Sentinel helps you identify issues and security threats such as: - Trying to resolve malicious domain names.
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Previously updated : 02/11/2024 Last updated : 04/04/2024 # Microsoft Sentinel feature support for Azure commercial/other clouds
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet | |||||| |[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#x2705; |&#10060; |
-|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Amazon Web Services S3](connect-aws.md?tabs=s3) |GA|&#x2705; |&#x2705; |&#10060; |
|[Microsoft Entra ID](connect-azure-active-directory.md) |GA |&#x2705; |&#x2705;|&#x2705; <sup>[1](#logsavailable)</sup> | |[Microsoft Entra ID Protection](connect-services-api-based.md) |GA |&#x2705;| &#x2705; |&#10060; | |[Azure Activity](data-connectors/azure-activity.md) |GA |&#x2705;| &#x2705;|&#x2705; |
This article describes the features available in Microsoft Sentinel across diffe
|[Cisco ASA](data-connectors/cisco-asa.md) |GA |&#x2705; |&#x2705;|&#x2705; | |[Codeless Connectors Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) |Public preview |&#x2705; |&#10060;|&#10060; | |[Common Event Format (CEF)](connect-common-event-format.md) |GA |&#x2705; |&#x2705;|&#x2705; |
-|[Common Event Format (CEF) via AMA (Preview)](connect-cef-ama.md) |Public preview |&#x2705;|&#10060; |&#x2705; |
+|[Common Event Format (CEF) via AMA](connect-cef-syslog-ama.md) |GA |&#x2705;|&#x2705; |&#x2705; |
|[DNS](data-connectors/dns.md) |Public preview |&#x2705;| &#10060; |&#x2705; | |[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public preview |&#x2705; |&#x2705; |&#10060; | |[Microsoft Defender XDR](connect-microsoft-365-defender.md?tabs=MDE) |GA |&#x2705;| &#x2705;|&#10060; |
This article describes the features available in Microsoft Sentinel across diffe
|[Office 365](connect-services-api-based.md) |GA |&#x2705;|&#x2705; |&#x2705; | |[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |&#x2705;|&#x2705; | |[Syslog](connect-syslog.md) |GA |&#x2705;| &#x2705;|&#x2705; |
+|[Syslog via AMA](connect-cef-syslog-ama.md) |GA |&#x2705;| &#x2705;|&#x2705; |
|[Windows DNS Events via AMA](connect-dns-ama.md) |GA |&#x2705; |&#x2705;|&#x2705; | |[Windows Firewall](data-connectors/windows-firewall.md) |GA |&#x2705; |&#x2705;|&#x2705; | |[Windows Forwarded Events](connect-services-windows-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
sentinel Microsoft Sentinel Defender Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md
description: Learn about changes in the Microsoft Defender portal with the integ
Previously updated : 04/03/2024 Last updated : 04/04/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal
The following capabilities are only available in the Defender portal.
||| |Attack disruption for SAP | [Automatic attack disruption in the Microsoft Defender portal](/microsoft-365/security/defender/automatic-attack-disruption) | + ### Azure portal only The following capabilities are only available in the Azure portal. |Capability |Learn more | |||
-|Tasks | [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) |
|Add entities to threat intelligence from incidents | [Add entity to threat indicators](add-entity-to-threat-intelligence.md) | | Automation | Some automation procedures are available only in the Azure portal. <br><br>Other automation procedures are the same in the Defender and Azure portals, but differ in the Azure portal between workspaces that are onboarded to the unified security operations platform and workspaces that aren't. <br><br>For more information, see [Automation with the unified security operations platform](automation.md#automation-with-the-unified-security-operations-platform). |
+| Hunt using bookmarks | [Bookmarks](/azure/sentinel/bookmarks) aren't supported in the advanced hunting experience in the Microsoft Defender portal. In the Defender portal, they're supported on the **Microsoft Sentinel > Threat management > Hunting** page. |
+|Tasks | [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) |
## Quick reference
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
If you didn't use a watchlist template to create your file,
|Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. | |Upload file | Either drag and drop your data file, or select **Browse for files** and select the file to upload. | |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
-
+
+ >[!NOTE]
+ > If your CSV file is greater than 3.8 MB, you need to use the instructions for [Create a large watchlist from file in Azure Storage](#create-a-large-watchlist-from-file-in-azure-storage-preview).
1. Select **Next: Review and Create**.
service-fabric How To Managed Cluster Application Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-secrets.md
Previously updated : 07/11/2022 Last updated : 04/08/2024 # Deploy application secrets to a Service Fabric managed cluster
For managed clusters you'll need three values, two from Azure Key Vault, and one
Parameters: * `Source Vault`: This is the resource ID of the key vault that contains the certificate * e.g.: /subscriptions/{subscriptionid}/resourceGroups/myrg1/providers/Microsoft.KeyVault/vaults/mykeyvault1
-* `Certificate URL`: This is the full object identifier and is case-insensitive and immutable
+* `Certificate URL`: This is the full Key Vault secret identifier and is case-insensitive and immutable
* https://mykeyvault1.vault.azure.net/secrets/{secretname}/{secret-version} * `Certificate Store`: This is the local certificate store on the nodes where the cert will be placed * certificate store name on the nodes, e.g.: "MY"
service-fabric Monitor Service Fabric Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/monitor-service-fabric-reference.md
+
+ Title: Monitoring data reference for Azure Service Fabric
+description: This article contains important reference material you need when you monitor Service Fabric.
Last updated : 03/26/2024+++++++
+# Azure Service Fabric monitoring data reference
++
+See [Monitor Service Fabric](monitor-service-fabric.md) for details on the data you can collect for Azure Service Fabric and how to use it.
+
+Azure Monitor doesn't collect any platform metrics or resource logs for Service Fabric. You can monitor and collect:
+
+- Service Fabric system, node, and application events. For the full event listing, see [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md).
+- Windows performance counters on nodes and applications. For the list of performance counters, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
+- Cluster, node, and system service health data. You can use the [FabricClient.HealthManager property](/dotnet/api/system.fabric.fabricclient.healthmanager) to get the health client to use for health related operations, like report health or get entity health.
+- Metrics for the guest operating system (OS) that runs on a cluster node, through one or more agents that run on the guest OS.
+
+ Guest OS metrics include performance counters that track guest CPU percentage or memory usage, which are frequently used for autoscaling or alerting. You can use the agent to send guest OS metrics to Azure Monitor Logs, where you can query them by using Log Analytics.
+
+ > [!NOTE]
+ > The Azure Monitor agent replaces the previously-used Azure Diagnostics extension and Log Analytics agent. For more information, see [Overview of Azure Monitor agents](/azure/azure-monitor/agents/agents-overview).
++
+### Service Fabric Clusters
+Microsoft.ServiceFabric/clusters
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
++
+- [Microsoft.ServiceFabric resource provider operations](/azure/role-based-access-control/permissions/compute#microsoftservicefabric)
+
+## Related content
+
+- See [Monitor Service Fabric](monitor-service-fabric.md) for a description of monitoring Service Fabric.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md) for the list of Service Fabric system, node, and application events.
+- See [Performance metrics](service-fabric-diagnostics-event-generation-perf.md) for the list of Windows performance counters on nodes and applications.
service-fabric Monitor Service Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/monitor-service-fabric.md
+
+ Title: Monitor Azure Service Fabric
+description: Start here to learn how to monitor Service Fabric.
Last updated : 03/26/2024+++++++
+# Monitor Azure Service Fabric
++
+## Azure Service Fabric monitoring
+
+Azure Service Fabric has the following layers that you can monitor:
+
+- Service health and performance counters for the service *infrastructure*. For more information, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
+- Client metrics, logs, and events for the *platform* or *cluster* nodes, including container metrics. The metrics and logs are different for Linux or Windows nodes. For more information, see [Monitor the cluster](service-fabric-diagnostics-event-generation-infra.md).
+- The *applications* that run on the nodes. You can monitor applications with Application Insights key or SDK, EventStore, or ASP.NET Core logging. For more information, see [Application logging](service-fabric-diagnostics-event-generation-app.md).
+
+You can monitor how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. [Azure Monitor logs](service-fabric-diagnostics-event-analysis-oms.md) and [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) offer built-in integration with Service Fabric.
+
+- For an overview of monitoring and diagnostics for Service Fabric infrastructure, platform, and applications, see [Monitoring and diagnostics for Azure Service Fabric](service-fabric-diagnostics-overview.md).
+- For a tutorial that shows how to view Service Fabric events and health reports, query the EventStore APIs, and monitor performance counters, see [Tutorial: Monitor a Service Fabric cluster in Azure](service-fabric-tutorial-monitor-cluster.md).
+
+### Service Fabric Explorer
+
+[Service Fabric Explorer](service-fabric-visualizing-your-cluster.md), a desktop application for Windows, macOS, and Linux, is an open-source tool for inspecting and managing Azure Service Fabric clusters. To enable automation, every action that can be taken through Service Fabric Explorer can also be done through PowerShell or a REST API.
+
+### EventStore
+
+[EventStore](service-fabric-diagnostics-eventstore.md) is a feature that shows Service Fabric platform events in Service Fabric Explorer and programmatically through the [Service Fabric Client Library](/dotnet/api/overview/azure/service-fabric#client-library) REST API. You can see a snapshot view of what's going on in your cluster for each node, service, and application, and query based on the time of the event.
+
+The EventStore APIs are available only for Windows clusters running on Azure. On Windows machines, these events are fed into the Event Log, so you can see Service Fabric Events in Event Viewer.
+
+### Application Insights
+
+Application Insights integrates with Service Fabric to provide Service Fabric-specific metrics and tooling experiences for Visual Studio and the Azure portal. Application Insights provides a comprehensive out-of-the-box logging experience. For more information, see [Event analysis and visualization with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
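As a minimal sketch of emitting custom telemetry from service code (the `Microsoft.ApplicationInsights` package is assumed, and the connection string below is a placeholder), you might write:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Placeholder connection string; copy the real value from your Application Insights resource.
var configuration = new TelemetryConfiguration
{
    ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000"
};
var telemetryClient = new TelemetryClient(configuration);

// Custom events and traces emitted from service code show up alongside the
// built-in Service Fabric telemetry in Application Insights.
telemetryClient.TrackEvent("OrderProcessed");
telemetryClient.TrackTrace("Replica started processing requests.");
telemetryClient.Flush();
```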
++
+For more information about the resource types for Azure Service Fabric, see [Service Fabric monitoring data reference](monitor-service-fabric-reference.md).
++++
+### Performance counters
+
+Service Fabric system performance is usually measured through performance counters. These performance counters can come from various sources including the operating system, the .NET framework, or the Service Fabric platform itself. For a list of performance counters that should be collected at the infrastructure level, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
+
+Service Fabric also provides a set of performance counters for the Reliable Services and Actors programming models. For more information, see [Monitoring for Reliable Service Remoting](service-fabric-reliable-serviceremoting-diagnostics.md#performance-counters) and [Performance monitoring for Reliable Actors](service-fabric-reliable-actors-diagnostics.md#performance-counters).
+
+Azure Monitor Logs is recommended for monitoring cluster level events. After you configure the [Log Analytics agent](service-fabric-diagnostics-oms-agent.md) with your workspace, you can collect:
+
+- Performance metrics such as CPU Utilization.
+- .NET performance counters such as process level CPU utilization.
+- Service Fabric performance counters such as number of exceptions from a reliable service.
+- Container metrics such as CPU Utilization.
+
+### Guest OS metrics
+
+Metrics for the guest operating system (OS) that runs on Service Fabric cluster nodes must be collected through one or more agents that run on the guest OS. Guest OS metrics include performance counters that track guest CPU percentage or memory usage, both of which are frequently used for autoscaling or alerting.
+
+A best practice is to use and configure the Azure Monitor agent to send guest OS performance metrics through the custom metrics API into the Azure Monitor metrics database. You can send the guest OS metrics to Azure Monitor Logs by using the same agent. Then you can query on those metrics and logs by using Log Analytics.
+
+>[!NOTE]
+>The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analytics agent for guest OS routing. For more information, see [Overview of Azure Monitor agents](/azure/azure-monitor/agents/agents-overview).
++
+## Service Fabric logs and events
+
+Service Fabric can collect the following logs:
+
+- For Windows clusters, you can set up cluster monitoring with [Diagnostics Agent](service-fabric-diagnostics-event-aggregation-wad.md) and [Azure Monitor logs](service-fabric-diagnostics-oms-setup.md).
+- For Linux clusters, Azure Monitor Logs is also the recommended tool for Azure platform and infrastructure monitoring. Linux platform diagnostics require different configuration. For more information, see [Service Fabric Linux cluster events in Syslog](service-fabric-diagnostics-oms-syslog.md).
+- You can configure the Azure Monitor agent to send guest OS logs to Azure Monitor Logs, where you can query on them by using Log Analytics.
+- You can write Service Fabric container logs to *stdout* or *stderr* so they're available in Azure Monitor Logs.
+
+### Service Fabric events
+
+Service Fabric provides a comprehensive set of diagnostics events out of the box, which you can access through the EventStore or the operational event channel the platform exposes. These [Service Fabric events](service-fabric-diagnostics-events.md) illustrate actions done by the platform on different entities such as nodes, applications, services, and partitions. The same events are available on both Windows and Linux clusters.
+
+On Windows, Service Fabric events are available from a single Event Tracing for Windows (ETW) provider with a set of relevant `logLevelKeywordFilters` used to pick between Operational and Data & Messaging channels. On Linux, Service Fabric events come through LTTng and are put into one Azure Storage table, from where they can be filtered as needed. Diagnostics can be enabled at cluster creation time, which creates a Storage table where the events from these channels are sent.
+
+The events are sent through standard channels on both Windows and Linux and can be read by any monitoring tool that supports them, including Azure Monitor Logs. For more information, see [Azure Monitor logs integration](service-fabric-diagnostics-event-analysis-oms.md).
+
+### Health monitoring
+
+The Service Fabric platform includes a health model, which provides extensible health reporting for the status of entities in a cluster. Each node, application, service, partition, replica, or instance has a continuously updatable health status. Each time the health of a particular entity transitions, an event is also emitted. You can set up queries and alerts for health events in your monitoring tool, just like any other event.
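You can also read health programmatically. The following is a minimal sketch using the `System.Fabric` client library (assuming the Service Fabric client packages are available and the code runs where it can reach the cluster's client connection endpoint) that reads the aggregated cluster health and the per-node health states:

```csharp
using System;
using System.Fabric;
using System.Fabric.Health;

// With no arguments, FabricClient connects to the local cluster;
// pass endpoints and credentials to reach a remote or secured cluster.
var fabricClient = new FabricClient();

ClusterHealth clusterHealth = await fabricClient.HealthManager.GetClusterHealthAsync();
Console.WriteLine($"Cluster health: {clusterHealth.AggregatedHealthState}");

foreach (NodeHealthState node in clusterHealth.NodeHealthStates)
{
    Console.WriteLine($"{node.NodeName}: {node.AggregatedHealthState}");
}
```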
+
+## Partner logging solutions
+
+Many events are written out through ETW providers and are extensible with other logging solutions. Examples are [Elastic Stack](https://www.elastic.co/products), especially if you're running a cluster in an offline environment, or [Dynatrace](https://www.dynatrace.com/). For a list of integrated partners, see [Azure Service Fabric Monitoring Partners](service-fabric-diagnostics-partners.md).
+++
+For an overview of common Service Fabric monitoring analytics scenarios, see [Diagnose common scenarios with Service Fabric](service-fabric-diagnostics-common-scenarios.md).
+++
+### Sample queries
+
+The following queries return Service Fabric Events, including actions on nodes. For other useful queries, see [Service Fabric Events](service-fabric-tutorial-monitor-cluster.md#view-service-fabric-events-including-actions-on-nodes).
+
+Return operational events recorded in the last hour:
+
+```kusto
+ServiceFabricOperationalEvent
+| where TimeGenerated > ago(1h)
+| join kind=leftouter ServiceFabricEvent on EventId
+| project EventId, EventName, TaskName, Computer, ApplicationName, EventMessage, TimeGenerated
+| sort by TimeGenerated
+```
+
+Return Health Reports with HealthState == 3 (Error), and extract more properties from the `EventMessage` field:
+
+```kusto
+ServiceFabricOperationalEvent
+| join kind=leftouter ServiceFabricEvent on EventId
+| extend HealthStateId = extract(@"HealthState=(\S+) ", 1, EventMessage, typeof(int))
+| where TaskName == 'HM' and HealthStateId == 3
+| extend SourceId = extract(@"SourceId=(\S+) ", 1, EventMessage, typeof(string)),
+ Property = extract(@"Property=(\S+) ", 1, EventMessage, typeof(string)),
+ HealthState = case(HealthStateId == 0, 'Invalid', HealthStateId == 1, 'Ok', HealthStateId == 2, 'Warning', HealthStateId == 3, 'Error', 'Unknown'),
+ TTL = extract(@"TTL=(\S+) ", 1, EventMessage, typeof(string)),
+ SequenceNumber = extract(@"SequenceNumber=(\S+) ", 1, EventMessage, typeof(string)),
+ Description = extract(@"Description='([\S\s, ^']+)' ", 1, EventMessage, typeof(string)),
+ RemoveWhenExpired = extract(@"RemoveWhenExpired=(\S+) ", 1, EventMessage, typeof(bool)),
+ SourceUTCTimestamp = extract(@"SourceUTCTimestamp=(\S+)", 1, EventMessage, typeof(datetime)),
+ ApplicationName = extract(@"ApplicationName=(\S+) ", 1, EventMessage, typeof(string)),
+ ServiceManifest = extract(@"ServiceManifest=(\S+) ", 1, EventMessage, typeof(string)),
+ InstanceId = extract(@"InstanceId=(\S+) ", 1, EventMessage, typeof(string)),
+ ServicePackageActivationId = extract(@"ServicePackageActivationId=(\S+) ", 1, EventMessage, typeof(string)),
+ NodeName = extract(@"NodeName=(\S+) ", 1, EventMessage, typeof(string)),
+ Partition = extract(@"Partition=(\S+) ", 1, EventMessage, typeof(string)),
+ StatelessInstance = extract(@"StatelessInstance=(\S+) ", 1, EventMessage, typeof(string)),
+ StatefulReplica = extract(@"StatefulReplica=(\S+) ", 1, EventMessage, typeof(string))
+```
+
+Get Service Fabric operational events aggregated with the specific service and node:
+
+```kusto
+ServiceFabricOperationalEvent
+| where ApplicationName != "" and ServiceName != ""
+| summarize AggregatedValue = count() by ApplicationName, ServiceName, Computer
+```
++
+### Service Fabric alert rules
+
+The following table lists some alert rules for Service Fabric. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Service Fabric monitoring data reference](monitor-service-fabric-reference.md) or the [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md#application-events).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Node event | Node goes down | ServiceFabricOperationalEvent where EventID >= 25622 and EventID <= 25626. These Event IDs are found in the [Node events reference](service-fabric-diagnostics-event-generation-operational.md#node-events). |
+| Application event | Application upgrade rollback | ServiceFabricOperationalEvent where EventID == 29623 or EventID == 29624. These Event IDs are found in the [Application events reference](service-fabric-diagnostics-event-generation-operational.md#application-events). |
+| Resource health | Upgrade service unreachable/unavailable | Cluster goes to UpgradeServiceUnreachable state. |
++
+## Related content
+
+- See [Service Fabric monitoring data reference](monitor-service-fabric-reference.md) for a reference of the metrics, logs, and other important values created for Service Fabric.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
+- See the [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md).
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
Previously updated : 01/31/2024 Last updated : 04/01/2024 # Migrate from a Run As account to Managed Identities
To link an existing managed identity Automation account to your Recovery Service
1. Go back to your recovery services vault. On the left pane, select the **Access control (IAM)** option. :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/add-mi-iam.png" alt-text="Screenshot that shows IAM settings page."::: 1. Select **Add** > **Add role assignment** > **Contributor** to open the **Add role assignment** page.
+ > [!NOTE]
+ > Once the automation account is set, you can change the role of the account from *Contributor* to *Site Recovery Contributor*.
1. On the **Add role assignment** page, ensure to select **Managed identity**. 1. Select the **Select members**. In the **Select managed identities** pane, do the following: 1. In the **Select** field, enter the name of the managed identity automation account.
storage Blob Inventory How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory-how-to.md
Enable blob inventory reports by adding a policy with one or more rules to your
5. In the **Add a rule** page, name your new rule.
-6. Choose a container.
+6. Choose the container that will store inventory reports.
7. Under **Object type to inventory**, choose whether to create a report for blobs or containers.
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models; using Azure.Storage; using System.IO;- ``` + ## Authorize access and connect to data resources To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Microsoft Entra ID, an account access key, or a shared access signature (SAS).
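For example, a minimal sketch that authorizes with Microsoft Entra ID through `DefaultAzureCredential` (the account name and file system name below are placeholders) looks like this:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Files.DataLake;

// Placeholder account name; DefaultAzureCredential resolves an Entra ID identity
// from the environment (developer sign-in, managed identity, and so on).
var accountUri = new Uri("https://<storage-account-name>.dfs.core.windows.net");
var serviceClient = new DataLakeServiceClient(accountUri, new DefaultAzureCredential());

// The service client is the entry point for file system, directory, and file operations.
DataLakeFileSystemClient fileSystemClient = serviceClient.GetFileSystemClient("my-file-system");
```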
storage Data Lake Storage Directory File Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-java.md
import com.azure.storage.file.datalake.models.*;
import com.azure.storage.file.datalake.options.*; ``` + ## Authorize access and connect to data resources To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Microsoft Entra ID, an account access key, or a shared access signature (SAS).
storage Data Lake Storage Directory File Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-javascript.md
StorageSharedKeyCredential
} = require("@azure/storage-file-datalake"); ``` + ## Connect to the account To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
storage Data Lake Storage Directory File Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-python.md
from azure.storage.filedatalake import (
from azure.identity import DefaultAzureCredential ``` + ## Authorize access and connect to data resources To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Microsoft Entra ID, an account access key, or a shared access signature (SAS).
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
You can't delete a locked time-based retention policy. You can extend the retent
### Retention policy audit logging
-Each container with a time-based retention policy enabled provides a policy audit log. The audit log includes up to seven time-based retention commands for locked time-based retention policies. Log entries include the user ID, command type, time stamps, and retention interval. The audit log is retained for the policy's lifetime in accordance with the SEC 17a-4(f) regulatory guidelines.
+Each container with a time-based retention policy enabled provides a policy audit log. The audit log includes up to seven time-based retention commands for locked time-based retention policies. Logging typically starts once you have locked the policy. Log entries include the user ID, command type, time stamps, and retention interval. The audit log is retained for the policy's lifetime in accordance with the SEC 17a-4(f) regulatory guidelines.
The Azure Activity log provides a more comprehensive log of all management service activities. Azure resource logs retain information about data operations. It's the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
The following sample rule filters the account to run the actions on objects that
### Rule filters
-Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined, a logical `AND` runs on all filters. You can use a filter to specify which blobs to include. A filter provides no means to specify which blobs to exclude.
+Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined, a logical `AND` runs on all filters. You can use a filter to specify which blobs to include. A filter provides no means to specify which blobs to exclude.
Filters include: | Filter name | Filter type | Notes | Is Required | |-|-|-|-|
-| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`, set tier isn't supported. | Yes |
-| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`.<br /><br />To match the container or blob name exactly, include the trailing forward slash ('/'), *e.g.*, `sample-container/` or `sample-container/blob1/`. To match the container or blob name pattern, omit the trailing forward slash, *e.g.*, `sample-container` or `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. Prefix strings don't support wildcard matching. Characters such as `*` and `?` are treated as string literals. | No |
+| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only the Delete action is supported for `appendBlob`; Set Tier isn't supported. | Yes |
+| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...`, specify the **prefixMatch** as `sample-container/blob1`. This filter will match all blobs in *sample-container* whose names begin with *blob1*. | If you don't define **prefixMatch**, the rule applies to all blobs within the storage account. Prefix strings don't support wildcard matching. Characters such as `*` and `?` are treated as string literals. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag condition. For example, if you want to match all blobs with `Project = Contoso` under `https://myaccount.blob.core.windows.net/` for a rule, the blobIndexMatch is `{"name": "Project","op": "==","value": "Contoso"}`. | If you don't define blobIndexMatch, the rule applies to all blobs within the storage account. | No | To learn more about the blob index feature together with known issues and limitations, see [Manage and find data on Azure Blob Storage with blob index](storage-manage-find-blobs.md).
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Point-in-time restore provides protection against accidental deletion or corruption by enabling you to restore block blob data to an earlier state. Point-in-time restore is useful in scenarios where a user or application accidentally deletes data or where an application error corrupts data. Point-in-time restore also enables testing scenarios that require reverting a data set to a known state before running further tests.
-Point-in-time restore is supported for general-purpose v2 storage accounts in the standard performance tier only. Only data in the hot and cool access tiers can be restored with point-in-time restore.
+Point-in-time restore is supported for general-purpose v2 storage accounts in the standard performance tier only. Only data in the hot and cool access tiers can be restored with point-in-time restore. Point-in-time restore is not yet supported in accounts that have a hierarchical namespace.
To learn how to enable point-in-time restore for a storage account, see [Perform a point-in-time restore on block blob data](point-in-time-restore-manage.md). + ## How point-in-time restore works To enable point-in-time restore, you create a management policy for the storage account and specify a retention period. During the retention period, you can restore block blobs from the present state to a state at a previous point in time.
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
The following clients have compatible algorithm support with SFTP for Azure Blob
- JSCH 0.1.54+ - curl 7.85.0+ - AIX<sup>1</sup>
+- MobaXterm v21.3
<sup>1</sup> Must set `AllowPKCS12KeystoreAutoOpen` option to `no`.
storage Elastic San Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-metrics.md
The following metrics are currently available for your Elastic SAN resource. You
|Metric|Definition| |||
-|**Used Capacity**|The total amount of storage used in your SAN resources. At the SAN level, it's the sum of capacity used by volume groups and volumes, in bytes. At the volume group level, it's the sum of the capacity used by all volumes in the volume group, in bytes|
+|**Used Capacity**|The total amount of storage used in your SAN resources. At the SAN level, it's the sum of capacity used by volume groups and volumes, in bytes.|
|**Transactions**|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests that produced errors.| |**E2E Latency**|The average end-to-end latency of successful requests made to the resource or the specified API operation.| |**Server Latency**|The average time used to process a successful request. This value doesn't include the network latency specified in **E2E Latency**. | |**Ingress**|The amount of ingress data. This number includes ingress to the resource from external clients as well as ingress within Azure. | |**Egress**|The amount of egress data. This number includes egress from the resource to external clients as well as egress within Azure. |
-By default, all metrics are shown at the SAN level. To view these metrics at either the volume group or volume level, select a filter on your selected metric to view your data on a specific volume group or volume.
+All metrics are shown at the elastic SAN level.
## Next steps
storage File Sync Choose Cloud Tiering Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md
description: Details on what to keep in mind when choosing Azure File Sync cloud
Previously updated : 03/26/2024 Last updated : 04/08/2024
This article provides guidance on selecting and adjusting cloud tiering policies
- Cloud tiering isn't supported on the Windows system volume. -- You can still enable cloud tiering if you have a volume-level FSRM quota. Once an FSRM quota is set, the free space query APIs that get called automatically report the free space on the volume as per the quota setting.
+- If you're using File Server Resource Manager (FSRM) for quota management on server endpoints, we recommend applying the quotas at the folder level and not at the volume level. You can still enable cloud tiering if you have a volume-level FSRM quota. Once an FSRM quota is set, the free space query APIs that get called automatically report the free space on the volume as per the quota setting. However, when a hard quota is present on a volume root, the actual free space on the volume and the quota restricted space on the volume might not be the same. This could cause endless tiering if Azure File Sync thinks there isn't enough volume free space on the server endpoint.
### Minimum file size for a file to tier
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares significantly impr
Previously updated : 04/01/2024 Last updated : 04/07/2024
Azure Files geo-redundancy for large file shares is generally available in the m
| Australia Southeast | GA | | Brazil South | Preview | | Brazil Southeast | Preview |
-| Canada Central | Preview |
-| Canada East | Preview |
+| Canada Central | GA |
+| Canada East | GA |
| Central India | Preview | | Central US | GA | | China East | GA |
Azure Files geo-redundancy for large file shares is generally available in the m
| North Europe | Preview | | Norway East | GA | | Norway West | GA |
-| South Africa North | Preview |
-| South Africa West | Preview |
+| South Africa North | GA |
+| South Africa West | GA |
| South Central US | Preview | | South India | Preview | | Southeast Asia | GA |
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Most workloads that require cloud file storage work well on either Azure Files o
| Encryption | All protocols<br><ul><li>Encryption at rest (AES-256) with customer or Microsoft-managed keys</li></ul><br>SMB<br><ul><li>Kerberos encryption using AES-256 (recommended) or RC4-HMAC</li><li>Encryption in transit</li></ul><br>REST<br><ul><li>Encryption in transit</li></ul><br> To learn more, see [Security and networking](files-nfs-protocol.md#security-and-networking). | All protocols<br><ul><li>Encryption at rest (AES-256) with Microsoft-managed keys</li><li>[Encryption at rest (AES-256) with customer-managed keys](../../azure-netapp-files/configure-customer-managed-keys.md)</li></ul><br>SMB<ul><li>Encryption in transit using AES-CCM (SMB 3.0) and AES-GCM (SMB 3.1.1)</li></ul><br>NFS 4.1<ul><li>Encryption in transit using Kerberos with AES-256</li></ul><br> To learn more, see [security FAQ](../../azure-netapp-files/faq-security.md). | | Access Options | <ul><li>Internet</li><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>Azure File Sync</li></ul><br> To learn more, see [network considerations](./storage-files-networking-overview.md). | <ul><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>[Virtual WAN](../../azure-netapp-files/configure-virtual-wan.md)</li><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[HPC Cache](../../hpc-cache/hpc-cache-overview.md)</li><li>[Standard Network Features](../../azure-netapp-files/azure-netapp-files-network-topologies.md#configurable-network-features)</li></ul><br> To learn more, see [network considerations](../../azure-netapp-files/azure-netapp-files-network-topologies.md). | | Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>[Azure NetApp Files backup](../../azure-netapp-files/backup-introduction.md)</li><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li><li>[Cross-zone replication](../../azure-netapp-files/cross-zone-replication-introduction.md)</li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md). |
-| Migration Tools | <ul><li>Azure Data Box</li><li>Azure File Sync</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li></ul><br> To learn more, see [Migrate to Azure file shares](./storage-files-migration-overview.md). | <ul><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[CloudSync](https://cloud.netapp.com/cloud-sync-service), [XCP](https://xcp.netapp.com/)</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li><li>Application-based (for example, HSR, Data Guard, AOAG)</li></ul> |
+| Migration Tools | <ul><li>Azure Data Box</li><li>Azure File Sync</li><li>Azure Storage Mover</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li></ul><br> To learn more, see [Migrate to Azure file shares](./storage-files-migration-overview.md). | <ul><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[CloudSync](https://cloud.netapp.com/cloud-sync-service), [XCP](https://xcp.netapp.com/)</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li><li>Application-based (for example, HSR, Data Guard, AOAG)</li></ul> |
| Tiers | <ul><li>Premium</li><li>Transaction Optimized</li><li>Hot</li><li>Cool</li></ul><br> To learn more, see [storage tiers](./storage-files-planning.md#storage-tiers). | <ul><li>Ultra</li><li>Premium</li><li>Standard</li></ul><br> All tiers provide sub-ms minimum latency.<br><br> To learn more, see [Service Levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) and [Performance Considerations](../../azure-netapp-files/azure-netapp-files-performance-considerations.md). | | Pricing | [Azure Files Pricing](https://azure.microsoft.com/pricing/details/storage/files/) | [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/) |
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 03/22/2024 Last updated : 04/05/2024
The following table indicates which targets are soft, representing the Microsoft
| Resource | Target | Hard limit | |-|--|| | Storage Sync Services per region | 100 Storage Sync Services | Yes |
+| Storage Sync Services per subscription | 15 Storage Sync Services | Yes |
| Sync groups per Storage Sync Service | 200 sync groups | Yes | | Registered servers per Storage Sync Service | 99 servers | Yes | | Private endpoints per Storage Sync Service | 100 private endpoints | Yes |
stream-analytics Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/functions-overview.md
Azure Stream Analytics supports the following four function types:
* Azure Machine Learning You can use these functions for scenarios such as real-time scoring using machine learning models, string manipulations, complex mathematical calculations, encoding and decoding data.
+> [!IMPORTANT]
+> C# user-defined functions for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature.
## Limitations
-User-defined functions are stateless, and the return value can only be a scalar value. You cannot call out to external REST endpoints from these user-defined functions, as it will likely impact performance of your job.
+User-defined functions are stateless, and the return value can only be a scalar value. You can't call out to external REST endpoints from these user-defined functions, as it will likely impact performance of your job.
-Azure Stream Analytics does not keep a record of all functions invocations and returned results. To guarantee repeatability - for example, re-running your job from older timestamp produces the same results again - do not to use functions such as `Date.GetData()` or `Math.random()`, as these functions do not return the same result for each invocation.
+Azure Stream Analytics doesn't keep a record of all function invocations and returned results. To guarantee repeatability - for example, re-running your job from an older timestamp produces the same results again - don't use functions such as `Date.GetData()` or `Math.random()`, as these functions don't return the same result for each invocation.
## Resource logs
+Any runtime errors are considered fatal and are surfaced through activity and resource logs. It's recommended that your function handle all exceptions and errors and return a valid result to your query. Doing so prevents your job from going to a [Failed state](job-states.md).
+Any runtime errors are considered fatal and are surfaced through activity and resource logs. It's recommended that your function handles all exceptions and errors and return a valid result to your query. This will prevent your job from going to a [Failed state](job-states.md).
## Exception handling
-Any exception during data processing is considered a catastrophic failure when consuming data in Azure Stream Analytics. User-defined functions have a higher potential to throw exceptions and cause the processing to stop. To avoid this issue, use a *try-catch* block in JavaScript or C# to catch exceptions during code execution. Exceptions that are caught can be logged and treated without causing a system failure. You are encouraged to always wrap your custom code in a *try-catch* block to avoid throwing unexpected exceptions to the processing engine.
+Any exception during data processing is considered a catastrophic failure when consuming data in Azure Stream Analytics. User-defined functions have a higher potential to throw exceptions and cause the processing to stop. To avoid this issue, use a *try-catch* block in JavaScript or C# to catch exceptions during code execution. Exceptions that are caught can be logged and treated without causing a system failure. You're encouraged to always wrap your custom code in a *try-catch* block to avoid throwing unexpected exceptions to the processing engine.
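As a minimal C# sketch (the class, method, and sentinel value here are illustrative, not a required contract), a UDF can catch its own exceptions and return a value that the query can filter out, rather than letting the exception fail the job:

```csharp
using System;
using System.Globalization;

public static class UdfSketch
{
    // Parses a numeric field defensively; any failure returns a sentinel value
    // instead of throwing, so the job doesn't go to a Failed state.
    public static double SafeParse(string input)
    {
        try
        {
            return double.Parse(input, CultureInfo.InvariantCulture);
        }
        catch (Exception)
        {
            return -1.0; // sentinel meaning "could not parse"; filter it out in the query
        }
    }
}
```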
## Next steps
stream-analytics Visual Studio Code Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/visual-studio-code-custom-deserializer.md
Last updated 01/21/2023
# Tutorial: Custom .NET deserializers for Azure Stream Analytics in Visual Studio Code (Preview)
+> [!IMPORTANT]
+> Custom .NET deserializers for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature.
+ Azure Stream Analytics has built-in support for three data formats: JSON, CSV, and Avro, as shown in [this article](stream-analytics-parsing-json.md). With custom .NET deserializers, you can process data in other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond), and other user-defined formats for cloud jobs. This tutorial demonstrates how to create, test, and debug a custom .NET deserializer for an Azure Stream Analytics job using Visual Studio Code. You'll learn how to:
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes. > * Post September 29, we will not be addressing any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Utilizing Spark 2.4 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns. > * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it.
-> * We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)).
+> * **We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md)).**
## Component versions
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> * Effective January 26, 2024, the Azure Synapse has stopped official support for Spark 3.1 Runtimes. > * Post January 26, 2024, we will not be addressing any support tickets related to Spark 3.1. There will be no release pipeline in place for bug or security fixes for Spark 3.1. Utilizing Spark 3.1 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns. > * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 3.1, but we will not provide any official support for it.
-> * We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)).
+> * **We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md))**.
## Component versions
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. > * End of Support announced runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment. > * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
-> * We recommend that you upgrade your Apache Spark 3.2 workloads to version 3.3 at your earliest convenience.
+> * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
## Component versions
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
# Azure Synapse Runtime for Apache Spark 3.3 (GA) Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.3.
-## Component versions
+> [!TIP]
+> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime, which currently is [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md).
+## Component versions
| Component | Version | | -- |--| | Apache Spark | 3.3.1 |
The following sections present the libraries included in Azure Synapse Runtime f
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4 refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md#migration-between-apache-spark-versionssupport).
synapse-analytics Apache Spark 34 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-34-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.4
-description: New runtime is in Public Preview. Try it and use Spark 3.4.1, Python 3.10, Delta Lake 2.4.
+description: New runtime is in GA stage. Try it and use Spark 3.4.1, Python 3.10, Delta Lake 2.4.
Last updated 11/17/2023
-# Azure Synapse Runtime for Apache Spark 3.4 (Public Preview)
+# Azure Synapse Runtime for Apache Spark 3.4 (GA)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.4. ## Component versions
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
## Libraries
-The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.4 (Public Preview).
+To check the libraries included in Azure Synapse Runtime for Apache Spark 3.4 for Java/Scala, Python, and R, see the Azure Synapse Runtime for Apache Spark 3.4 release notes.
-### Scala and Java default libraries
-The following table lists all the default level packages for Java/Scala and their respective versions.
-
-| GroupID | ArtifactID | Version |
-|-||--|
-| com.aliyun | aliyun-java-sdk-core | 4.5.10 |
-| com.aliyun | aliyun-java-sdk-kms | 2.11.0 |
-| com.aliyun | aliyun-java-sdk-ram | 3.1.0 |
-| com.aliyun | aliyun-sdk-oss | 3.13.0 |
-| com.amazonaws | aws-java-sdk-bundle | 1.12.1026 |
-| com.chuusai | shapeless_2.12 | 2.3.7 |
-| com.clearspring.analytics | stream | 2.9.6 |
-| com.esotericsoftware | kryo-shaded | 4.0.2 |
-| com.esotericsoftware | minlog | 1.3.0 |
-| com.fasterxml.jackson | jackson-annotations | 2.13.4 |
-| com.fasterxml.jackson | jackson-core | 2.13.4 |
-| com.fasterxml.jackson | jackson-core-asl | 1.9.13 |
-| com.fasterxml.jackson | jackson-databind | 2.13.4.1 |
-| com.fasterxml.jackson | jackson-dataformat-cbor | 2.13.4 |
-| com.fasterxml.jackson | jackson-mapper-asl | 1.9.13 |
-| com.fasterxml.jackson | jackson-module-scala_2.12 | 2.13.4 |
-| com.github.joshelser | dropwizard-metrics-hadoop-metrics2-reporter | 0.1.2 |
-| com.github.luben | zstd-jni | 1.5.2-1 |
-| com.github.vowpalwabbit | vw-jni | 9.3.0 |
-| com.github.wendykierp | JTransforms | 3.1 |
-| com.google.code.findbugs | jsr305 | 3.0.0 |
-| com.google.code.gson | gson | 2.8.6 |
-| com.google.crypto.tink | tink | 1.6.1 |
-| com.google.flatbuffers | flatbuffers-java | 1.12.0 |
-| com.google.guava | guava | 14.0.1 |
-| com.google.protobuf | protobuf-java | 2.5.0 |
-| com.googlecode.json-simple | json-simple | 1.1.1 |
-| com.jcraft | jsch | 0.1.54 |
-| com.jolbox | bonecp | 0.8.0.RELEASE |
-| com.linkedin.isolation-forest | isolation-forest_3.2.0_2.12 | 2.0.8 |
-| com.microsoft.azure | azure-data-lake-store-sdk | 2.3.9 |
-| com.microsoft.azure | azure-eventhubs | 3.3.0 |
-| com.microsoft.azure | azure-eventhubs-spark_2.12 | 2.3.22 |
-| com.microsoft.azure | azure-keyvault-core | 1.0.0 |
-| com.microsoft.azure | azure-storage | 7.0.1 |
-| com.microsoft.azure | cosmos-analytics-spark-3.4.1-connector_2.12 | 1.8.10 |
-| com.microsoft.azure | qpid-proton-j-extensions | 1.2.4 |
-| com.microsoft.azure | synapseml_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-cognitive_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-core_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-deep-learning_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-internal_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-lightgbm_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-opencv_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-vw_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure.kusto | kusto-data | 3.2.1 |
-| com.microsoft.azure.kusto | kusto-ingest | 3.2.1 |
-| com.microsoft.azure.kusto | kusto-spark_3.0_2.12 | 3.1.16 |
-| com.microsoft.azure.kusto | spark-kusto-synapse-connector_3.1_2.12 | 1.3.3 |
-| com.microsoft.cognitiveservices.speech | client-jar-sdk | 1.14.0 |
-| com.microsoft.sqlserver | msslq-jdbc | 8.4.1.jre8 |
-| com.ning | compress-lzf | 1.1 |
-| com.sun.istack | istack-commons-runtime | 3.0.8 |
-| com.tdunning | json | 1.8 |
-| com.thoughtworks.paranamer | paranamer | 2.8 |
-| com.twitter | chill-java | 0.10.0 |
-| com.twitter | chill_2.12 | 0.10.0 |
-| com.typesafe | config | 1.3.4 |
-| com.univocity | univocity-parsers | 2.9.1 |
-| com.zaxxer | HikariCP | 2.5.1 |
-| commons-cli | commons-cli | 1.5.0 |
-| commons-codec | commons-codec | 1.15 |
-| commons-collections | commons-collections | 3.2.2 |
-| commons-dbcp | commons-dbcp | 1.4 |
-| commons-io | commons-io | 2.11.0 |
-| commons-lang | commons-lang | 2.6 |
-| commons-logging | commons-logging | 1.1.3 |
-| commons-pool | commons-pool | 1.5.4 |
-| dev.ludovic.netlib | arpack | 2.2.1 |
-| dev.ludovic.netlib | blas | 2.2.1 |
-| dev.ludovic.netlib | lapack | 2.2.1 |
-| io.airlift | aircompressor | 0.21 |
-| io.delta | delta-core_2.12 | 2.2.0.9 |
-| io.delta | delta-storage | 2.2.0.9 |
-| io.dropwizard.metrics | metrics-core | 4.2.7 |
-| io.dropwizard.metrics | metrics-graphite | 4.2.7 |
-| io.dropwizard.metrics | metrics-jmx | 4.2.7 |
-| io.dropwizard.metrics | metrics-json | 4.2.7 |
-| io.dropwizard.metrics | metrics-jvm | 4.2.7 |
-| io.github.resilience4j | resilience4j-core | 1.7.1 |
-| io.github.resilience4j | resilience4j-retry | 1.7.1 |
-| io.netty | netty-all | 4.1.74.Final |
-| io.netty | netty-buffer | 4.1.74.Final |
-| io.netty | netty-codec | 4.1.74.Final |
-| io.netty | netty-codec-http2 | 4.1.74.Final |
-| io.netty | netty-codec-http-4 | 4.1.74.Final |
-| io.netty | netty-codec-socks | 4.1.74.Final |
-| io.netty | netty-common | 4.1.74.Final |
-| io.netty | netty-handler | 4.1.74.Final |
-| io.netty | netty-resolver | 4.1.74.Final |
-| io.netty | netty-tcnative-classes | 2.0.48 |
-| io.netty | netty-transport | 4.1.74.Final |
-| io.netty | netty-transport-classes-epoll | 4.1.87.Final |
-| io.netty | netty-transport-classes-kqueue | 4.1.87.Final |
-| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-aarch_64 |
-| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-x86_64 |
-| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-aarch_64 |
-| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-x86_64 |
-| io.netty | netty-transport-native-unix-common | 4.1.87.Final |
-| io.opentracing | opentracing-api | 0.33.0 |
-| io.opentracing | opentracing-noop | 0.33.0 |
-| io.opentracing | opentracing-util | 0.33.0 |
-| io.spray | spray-json_2.12 | 1.3.5 |
-| io.vavr | vavr | 0.10.4 |
-| io.vavr | vavr-match | 0.10.4 |
-| jakarta.annotation | jakarta.annotation-api | 1.3.5 |
-| jakarta.inject | jakarta.inject | 2.6.1 |
-| jakarta.servlet | jakarta.servlet-api | 4.0.3 |
-| jakarta.validation-api | | 2.0.2 |
-| jakarta.ws.rs | jakarta.ws.rs-api | 2.1.6 |
-| jakarta.xml.bind | jakarta.xml.bind-api | 2.3.2 |
-| javax.activation | activation | 1.1.1 |
-| javax.jdo | jdo-api | 3.0.1 |
-| javax.transaction | jta | 1.1 |
-| javax.transaction | transaction-api | 1.1 |
-| javax.xml.bind | jaxb-api | 2.2.11 |
-| javolution | javolution | 5.5.1 |
-| jline | jline | 2.14.6 |
-| joda-time | joda-time | 2.10.13 |
-| mysql | mysql-connector-java | 8.0.18 |
-| net.razorvine | pickle | 1.2 |
-| net.sf.jpam | jpam | 1.1 |
-| net.sf.opencsv | opencsv | 2.3 |
-| net.sf.py4j | py4j | 0.10.9.5 |
-| net.sf.supercsv | super-csv | 2.2.0 |
-| net.sourceforge.f2j | arpack_combined_all | 0.1 |
-| org.antlr | ST4 | 4.0.4 |
-| org.antlr | antlr-runtime | 3.5.2 |
-| org.antlr | antlr4-runtime | 4.8 |
-| org.apache.arrow | arrow-format | 7.0.0 |
-| org.apache.arrow | arrow-memory-core | 7.0.0 |
-| org.apache.arrow | arrow-memory-netty | 7.0.0 |
-| org.apache.arrow | arrow-vector | 7.0.0 |
-| org.apache.avro | avro | 1.11.0 |
-| org.apache.avro | avro-ipc | 1.11.0 |
-| org.apache.avro | avro-mapred | 1.11.0 |
-| org.apache.commons | commons-collections4 | 4.4 |
-| org.apache.commons | commons-compress | 1.21 |
-| org.apache.commons | commons-crypto | 1.1.0 |
-| org.apache.commons | commons-lang3 | 3.12.0 |
-| org.apache.commons | commons-math3 | 3.6.1 |
-| org.apache.commons | commons-pool2 | 2.11.1 |
-| org.apache.commons | commons-text | 1.10.0 |
-| org.apache.curator | curator-client | 2.13.0 |
-| org.apache.curator | curator-framework | 2.13.0 |
-| org.apache.curator | curator-recipes | 2.13.0 |
-| org.apache.derby | derby | 10.14.2.0 |
-| org.apache.hadoop | hadoop-aliyun | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-annotations | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-aws | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-azure | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-azure-datalake | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-client-api | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-client-runtime | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-cloud-storage | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-openstack | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-shaded-guava | 1.1.1 |
-| org.apache.hadoop | hadoop-yarn-server-web-proxy | 3.3.3.5.2-106693326 |
-| org.apache.hive | hive-beeline | 2.3.9 |
-| org.apache.hive | hive-cli | 2.3.9 |
-| org.apache.hive | hive-common | 2.3.9 |
-| org.apache.hive | hive-exec | 2.3.9 |
-| org.apache.hive | hive-jdbc | 2.3.9 |
-| org.apache.hive | hive-llap-common | 2.3.9 |
-| org.apache.hive | hive-metastore | 2.3.9 |
-| org.apache.hive | hive-serde | 2.3.9 |
-| org.apache.hive | hive-service-rpc | 2.3.9 |
-| org.apache.hive | hive-shims-0.23 | 2.3.9 |
-| org.apache.hive | hive-shims | 2.3.9 |
-| org.apache.hive | hive-shims-common | 2.3.9 |
-| org.apache.hive | hive-shims-scheduler | 2.3.9 |
-| org.apache.hive | hive-storage-api | 2.7.2 |
-| org.apache.httpcomponents | httpclient | 4.5.13 |
-| org.apache.httpcomponents | httpcore | 4.4.14 |
-| org.apache.httpcomponents | httpmime | 4.5.13 |
-| org.apache.httpcomponents.client5 | httpclient5 | 5.1.3 |
-| org.apache.iceberg | delta-iceberg | 2.2.0.9 |
-| org.apache.ivy | ivy | 2.5.1 |
-| org.apache.kafka | kafka-clients | 2.8.1 |
-| org.apache.logging.log4j | log4j-1.2-api | 2.17.2 |
-| org.apache.logging.log4j | log4j-api | 2.17.2 |
-| org.apache.logging.log4j | log4j-core | 2.17.2 |
-| org.apache.logging.log4j | log4j-slf4j-impl | 2.17.2 |
-| org.apache.orc | orc-core | 1.7.6 |
-| org.apache.orc | orc-mapreduce | 1.7.6 |
-| org.apache.orc | orc-shims | 1.7.6 |
-| org.apache.parquet | parquet-column | 1.12.3 |
-| org.apache.parquet | parquet-common | 1.12.3 |
-| org.apache.parquet | parquet-encoding | 1.12.3 |
-| org.apache.parquet | parquet-format-structures | 1.12.3 |
-| org.apache.parquet | parquet-hadoop | 1.12.3 |
-| org.apache.parquet | parquet-jackson | 1.12.3 |
-| org.apache.qpid | proton-j | 0.33.8 |
-| org.apache.spark | spark-avro_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-catalyst_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-core_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-graphx_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-hadoop-cloud_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-hive_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-kvstore_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-launcher_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-mllib_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-mllib-local_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-network-common_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-network-shuffle_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-repl_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sketch_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sql_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sql-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-streaming_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-streaming-kafka-0-10-assembly_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-tags_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-token-provider-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-unsafe_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-yarn_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.thrift | libfb303 | 0.9.3 |
-| org.apache.thrift | libthrift | 0.12.0 |
-| org.apache.velocity | velocity | 1.5 |
-| org.apache.xbean | xbean-asm9-shaded | 4.2 |
-| org.apache.yetus | audience-annotations | 0.5.0 |
-| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
-| org.apiguardian | apiguardian-api | 1.1.0 |
-| org.codehaus.janino | commons-compiler | 3.0.16 |
-| org.codehaus.janino | janino | 3.0.16 |
-| org.codehaus.jettison | jettison | 1.1 |
-| org.datanucleus | datanucleus-api-jdo | 4.2.4 |
-| org.datanucleus | datanucleus-core | 4.1.17 |
-| org.datanucleus | datanucleus-rdbms | 4.1.19 |
-| org.datanucleusjavax.jdo | | 3.2.0-m3 |
-| org.eclipse.jetty | jetty-util | 9.4.48.v20220622 |
-| org.eclipse.jetty | jetty-util-ajax | 9.4.48.v20220622 |
-| org.fusesource.leveldbjni | leveldbjni-all | 1.8 |
-| org.glassfish.hk2 | hk2-api | 2.6.1 |
-| org.glassfish.hk2 | hk2-locator | 2.6.1 |
-| org.glassfish.hk2 | hk2-utils | 2.6.1 |
-| org.glassfish.hk2 | osgi-resource-locator | 1.0.3 |
-| org.glassfish.hk2.external | aopalliance-repackaged | 2.6.1 |
-| org.glassfish.jaxb | jaxb-runtime | 2.3.2 |
-| org.glassfish.jersey.containers | jersey-container-servlet | 2.36 |
-| org.glassfish.jersey.containers | jersey-container-servlet-core | 2.36 |
-| org.glassfish.jersey.core | jersey-client | 2.36 |
-| org.glassfish.jersey.core | jersey-common | 2.36 |
-| org.glassfish.jersey.core | jersey-server | 2.36 |
-| org.glassfish.jersey.inject | jersey-hk2 | 2.36 |
-| org.ini4j | ini4j | 0.5.4 |
-| org.javassist | javassist | 3.25.0-GA |
-| org.javatuples | javatuples | 1.2 |
-| org.jdom | jdom2 | 2.0.6 |
-| org.jetbrains | annotations | 17.0.0 |
-| org.jodd | jodd-core | 3.5.2 |
-| org.json | json | 20210307 |
-| org.json4s | json4s-ast_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-core_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-jackson_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-scalap_2.12 | 3.7.0-M11 |
-| org.junit.jupiter | junit-jupiter | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-api | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-engine | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-params | 5.5.2 |
-| org.junit.platform | junit-platform-commons | 1.5.2 |
-| org.junit.platform | junit-platform-engine | 1.5.2 |
-| org.lz4 | lz4-java | 1.8.0 |
-| org.mlflow | mlfow-spark | 2.1.1 |
-| org.objenesis | objenesis | 3.2 |
-| org.openpnp | opencv | 3.2.0-1 |
-| org.opentest4j | opentest4j | 1.2.0 |
-| org.postgresql | postgresql | 42.2.9 |
-| org.roaringbitmap | RoaringBitmap | 0.9.25 |
-| org.roaringbitmap | shims | 0.9.25 |
-| org.rocksdb | rocksdbjni | 6.20.3 |
-| org.scalactic | scalactic_2.12 | 3.2.14 |
-| org.scala-lang | scala-compiler | 2.12.15 |
-| org.scala-lang | scala-library | 2.12.15 |
-| org.scala-lang | scala-reflect | 2.12.15 |
-| org.scala-lang.modules | scala-collection-compat_2.12 | 2.1.1 |
-| org.scala-lang.modules | scala-java8-compat_2.12 | 0.9.0 |
-| org.scala-lang.modules | scala-parser-combinators_2.12 | 1.1.2 |
-| org.scala-lang.modules | scala-xml_2.12 | 1.2.0 |
-| org.scalanlp | breeze-macros_2.12 | 1.2 |
-| org.scalanlp | breeze_2.12 | 1.2 |
-| org.slf4j | jcl-over-slf4j | 1.7.32 |
-| org.slf4j | jul-to-slf4j | 1.7.32 |
-| org.slf4j | slf4j-api | 1.7.32 |
-| org.threeten | threeten-extra | 1.5.0 |
-| org.tukaani | xz | 1.8 |
-| org.typelevel | algebra_2.12 | 2.0.1 |
-| org.typelevel | cats-kernel_2.12 | 2.1.1 |
-| org.typelevel | spire_2.12 | 0.17.0 |
-| org.typelevel | spire-macros_2.12 | 0.17.0 |
-| org.typelevel | spire-platform_2.12 | 0.17.0 |
-| org.typelevel | spire-util_2.12 | 0.17.0 |
-| org.wildfly.openssl | wildfly-openssl | 1.0.7.Final |
-| org.xerial.snappy | snappy-java | 1.1.8.4 |
-| oro | oro | 2.0.8 |
-| pl.edu.icm | JLargeArrays | 1.5 |
-| stax | stax-api | 1.0.1 |
-
-### Python libraries
-
-The Azure Synapse Runtime for Apache Spark 3.4 is currently in Public Preview. During this phase, the Python libraries experience significant updates. Additionally, please note that some machine learning capabilities aren't yet supported, such as the PREDICT method and Synapse ML.
-
-### R libraries
-
-The following table lists all the default level packages for R and their respective versions.
-
-| Library | Version | Library | Version | Library | Version |
-||--|--||||
-| _libgcc_mutex | 0.1 | r-caret | 6.0_94 | r-praise | 1.0.0 |
-| _openmp_mutex | 4.5 | r-cellranger | 1.1.0 | r-prettyunits | 1.2.0 |
-| _r-mutex | 1.0.1 | r-class | 7.3_22 | r-proc | 1.18.4 |
-| _r-xgboost-mutex | 2 | r-cli | 3.6.1 | r-processx | 3.8.2 |
-| aws-c-auth | 0.7.0 | r-clipr | 0.8.0 | r-prodlim | 2023.08.28 |
-| aws-c-cal | 0.6.0 | r-clock | 0.7.0 | r-profvis | 0.3.8 |
-| aws-c-common | 0.8.23 | r-codetools | 0.2_19 | r-progress | 1.2.2 |
-| aws-c-compression | 0.2.17 | r-collections | 0.3.7 | r-progressr | 0.14.0 |
-| aws-c-event-stream | 0.3.1 | r-colorspace | 2.1_0 | r-promises | 1.2.1 |
-| aws-c-http | 0.7.10 | r-commonmark | 1.9.0 | r-proxy | 0.4_27 |
-| aws-c-io | 0.13.27 | r-config | 0.3.2 | r-pryr | 0.1.6 |
-| aws-c-mqtt | 0.8.13 | r-conflicted | 1.2.0 | r-ps | 1.7.5 |
-| aws-c-s3 | 0.3.12 | r-coro | 1.0.3 | r-purrr | 1.0.2 |
-| aws-c-sdkutils | 0.1.11 | r-cpp11 | 0.4.6 | r-quantmod | 0.4.25 |
-| aws-checksums | 0.1.16 | r-crayon | 1.5.2 | r-r2d3 | 0.2.6 |
-| aws-crt-cpp | 0.20.2 | r-credentials | 2.0.1 | r-r6 | 2.5.1 |
-| aws-sdk-cpp | 1.10.57 | r-crosstalk | 1.2.0 | r-r6p | 0.3.0 |
-| binutils_impl_linux-64 | 2.4 | r-crul | 1.4.0 | r-ragg | 1.2.6 |
-| bwidget | 1.9.14 | r-curl | 5.1.0 | r-rappdirs | 0.3.3 |
-| bzip2 | 1.0.8 | r-data.table | 1.14.8 | r-rbokeh | 0.5.2 |
-| c-ares | 1.20.1 | r-dbi | 1.1.3 | r-rcmdcheck | 1.4.0 |
-| ca-certificates | 2023.7.22 | r-dbplyr | 2.3.4 | r-rcolorbrewer | 1.1_3 |
-| cairo | 1.18.0 | r-desc | 1.4.2 | r-rcpp | 1.0.11 |
-| cmake | 3.27.6 | r-devtools | 2.4.5 | r-reactable | 0.4.4 |
-| curl | 8.4.0 | r-diagram | 1.6.5 | r-reactr | 0.5.0 |
-| expat | 2.5.0 | r-dials | 1.2.0 | r-readr | 2.1.4 |
-| font-ttf-dejavu-sans-mono | 2.37 | r-dicedesign | 1.9 | r-readxl | 1.4.3 |
-| font-ttf-inconsolata | 3 | r-diffobj | 0.3.5 | r-recipes | 1.0.8 |
-| font-ttf-source-code-pro | 2.038 | r-digest | 0.6.33 | r-rematch | 2.0.0 |
-| font-ttf-ubuntu | 0.83 | r-downlit | 0.4.3 | r-rematch2 | 2.1.2 |
-| fontconfig | 2.14.2 | r-dplyr | 1.1.3 | r-remotes | 2.4.2.1 |
-| fonts-conda-ecosystem | 1 | r-dtplyr | 1.3.1 | r-reprex | 2.0.2 |
-| fonts-conda-forge | 1 | r-e1071 | 1.7_13 | r-reshape2 | 1.4.4 |
-| freetype | 2.12.1 | r-ellipsis | 0.3.2 | r-rjson | 0.2.21 |
-| fribidi | 1.0.10 | r-evaluate | 0.23 | r-rlang | 1.1.1 |
-| gcc_impl_linux-64 | 13.2.0 | r-fansi | 1.0.5 | r-rlist | 0.4.6.2 |
-| gettext | 0.21.1 | r-farver | 2.1.1 | r-rmarkdown | 2.22 |
-| gflags | 2.2.2 | r-fastmap | 1.1.1 | r-rodbc | 1.3_20 |
-| gfortran_impl_linux-64 | 13.2.0 | r-fontawesome | 0.5.2 | r-roxygen2 | 7.2.3 |
-| glog | 0.6.0 | r-forcats | 1.0.0 | r-rpart | 4.1.21 |
-| glpk | 5 | r-foreach | 1.5.2 | r-rprojroot | 2.0.3 |
-| gmp | 6.2.1 | r-forge | 0.2.0 | r-rsample | 1.2.0 |
-| graphite2 | 1.3.13 | r-fs | 1.6.3 | r-rstudioapi | 0.15.0 |
-| gsl | 2.7 | r-furrr | 0.3.1 | r-rversions | 2.1.2 |
-| gxx_impl_linux-64 | 13.2.0 | r-future | 1.33.0 | r-rvest | 1.0.3 |
-| harfbuzz | 8.2.1 | r-future.apply | 1.11.0 | r-sass | 0.4.7 |
-| icu | 73.2 | r-gargle | 1.5.2 | r-scales | 1.2.1 |
-| kernel-headers_linux-64 | 2.6.32 | r-generics | 0.1.3 | r-selectr | 0.4_2 |
-| keyutils | 1.6.1 | r-gert | 2.0.0 | r-sessioninfo | 1.2.2 |
-| krb5 | 1.21.2 | r-ggplot2 | 3.4.2 | r-shape | 1.4.6 |
-| ld_impl_linux-64 | 2.4 | r-gh | 1.4.0 | r-shiny | 1.7.5.1 |
-| lerc | 4.0.0 | r-gistr | 0.9.0 | r-slider | 0.3.1 |
-| libabseil | 20230125 | r-gitcreds | 0.1.2 | r-sourcetools | 0.1.7_1 |
-| libarrow | 12.0.0 | r-globals | 0.16.2 | r-sparklyr | 1.8.2 |
-| libblas | 3.9.0 | r-glue | 1.6.2 | r-squarem | 2021.1 |
-| libbrotlicommon | 1.0.9 | r-googledrive | 2.1.1 | r-stringi | 1.7.12 |
-| libbrotlidec | 1.0.9 | r-googlesheets4 | 1.1.1 | r-stringr | 1.5.0 |
-| libbrotlienc | 1.0.9 | r-gower | 1.0.1 | r-survival | 3.5_7 |
-| libcblas | 3.9.0 | r-gpfit | 1.0_8 | r-sys | 3.4.2 |
-| libcrc32c | 1.1.2 | r-gt | 0.9.0 | r-systemfonts | 1.0.5 |
-| libcurl | 8.4.0 | r-gtable | 0.3.4 | r-testthat | 3.2.0 |
-| libdeflate | 1.19 | r-gtsummary | 1.7.2 | r-textshaping | 0.3.7 |
-| libedit | 3.1.20191231 | r-hardhat | 1.3.0 | r-tibble | 3.2.1 |
-| libev | 4.33 | r-haven | 2.5.3 | r-tidymodels | 1.1.0 |
-| libevent | 2.1.12 | r-hexbin | 1.28.3 | r-tidyr | 1.3.0 |
-| libexpat | 2.5.0 | r-highcharter | 0.9.4 | r-tidyselect | 1.2.0 |
-| libffi | 3.4.2 | r-highr | 0.1 | r-tidyverse | 2.0.0 |
-| libgcc-devel_linux-64 | 13.2.0 | r-hms | 1.1.3 | r-timechange | 0.2.0 |
-| libgcc-ng | 13.2.0 | r-htmltools | 0.5.6.1 | r-timedate | 4022.108 |
-| libgfortran-ng | 13.2.0 | r-htmlwidgets | 1.6.2 | r-tinytex | 0.48 |
-| libgfortran5 | 13.2.0 | r-httpcode | 0.3.0 | r-torch | 0.11.0 |
-| libgit2 | 1.7.1 | r-httpuv | 1.6.12 | r-triebeard | 0.4.1 |
-| libglib | 2.78.0 | r-httr | 1.4.7 | r-ttr | 0.24.3 |
-| libgomp | 13.2.0 | r-httr2 | 0.2.3 | r-tune | 1.1.2 |
-| libgoogle-cloud | 2.12.0 | r-ids | 1.0.1 | r-tzdb | 0.4.0 |
-| libgrpc | 1.55.1 | r-igraph | 1.5.1 | r-urlchecker | 1.0.1 |
-| libiconv | 1.17 | r-infer | 1.0.5 | r-urltools | 1.7.3 |
-| libjpeg-turbo | 3.0.0 | r-ini | 0.3.1 | r-usethis | 2.2.2 |
-| liblapack | 3.9.0 | r-ipred | 0.9_14 | r-utf8 | 1.2.4 |
-| libnghttp2 | 1.55.1 | r-isoband | 0.2.7 | r-uuid | 1.1_1 |
-| libnuma | 2.0.16 | r-iterators | 1.0.14 | r-v8 | 4.4.0 |
-| libopenblas | 0.3.24 | r-jose | 1.2.0 | r-vctrs | 0.6.4 |
-| libpng | 1.6.39 | r-jquerylib | 0.1.4 | r-viridislite | 0.4.2 |
-| libprotobuf | 4.23.2 | r-jsonlite | 1.8.7 | r-vroom | 1.6.4 |
-| libsanitizer | 13.2.0 | r-juicyjuice | 0.1.0 | r-waldo | 0.5.1 |
-| libssh2 | 1.11.0 | r-kernsmooth | 2.23_22 | r-warp | 0.2.0 |
-| libstdcxx-devel_linux-64 | 13.2.0 | r-knitr | 1.45 | r-whisker | 0.4.1 |
-| libstdcxx-ng | 13.2.0 | r-labeling | 0.4.3 | r-withr | 2.5.2 |
-| libthrift | 0.18.1 | r-labelled | 2.12.0 | r-workflows | 1.1.3 |
-| libtiff | 4.6.0 | r-later | 1.3.1 | r-workflowsets | 1.0.1 |
-| libutf8proc | 2.8.0 | r-lattice | 0.22_5 | r-xfun | 0.41 |
-| libuuid | 2.38.1 | r-lava | 1.7.2.1 | r-xgboost | 1.7.4 |
-| libuv | 1.46.0 | r-lazyeval | 0.2.2 | r-xml | 3.99_0.14 |
-| libv8 | 8.9.83 | r-lhs | 1.1.6 | r-xml2 | 1.3.5 |
-| libwebp-base | 1.3.2 | r-lifecycle | 1.0.3 | r-xopen | 1.0.0 |
-| libxcb | 1.15 | r-lightgbm | 3.3.5 | r-xtable | 1.8_4 |
-| libxgboost | 1.7.4 | r-listenv | 0.9.0 | r-xts | 0.13.1 |
-| libxml2 | 2.11.5 | r-lobstr | 1.1.2 | r-yaml | 2.3.7 |
-| libzlib | 1.2.13 | r-lubridate | 1.9.3 | r-yardstick | 1.2.0 |
-| lz4-c | 1.9.4 | r-magrittr | 2.0.3 | r-zip | 2.3.0 |
-| make | 4.3 | r-maps | 3.4.1 | r-zoo | 1.8_12 |
-| ncurses | 6.4 | r-markdown | 1.11 | rdma-core | 28.9 |
-| openssl | 3.1.4 | r-mass | 7.3_60 | re2 | 2023.03.02 |
-| orc | 1.8.4 | r-matrix | 1.6_1.1 | readline | 8.2 |
-| pandoc | 2.19.2 | r-memoise | 2.0.1 | rhash | 1.4.4 |
-| pango | 1.50.14 | r-mgcv | 1.9_0 | s2n | 1.3.46 |
-| pcre2 | 10.4 | r-mime | 0.12 | sed | 4.8 |
-| pixman | 0.42.2 | r-miniui | 0.1.1.1 | snappy | 1.1.10 |
-| pthread-stubs | 0.4 | r-modeldata | 1.2.0 | sysroot_linux-64 | 2.12 |
-| r-arrow | 12.0.0 | r-modelenv | 0.1.1 | tk | 8.6.13 |
-| r-askpass | 1.2.0 | r-modelmetrics | 1.2.2.2 | tktable | 2.1 |
-| r-assertthat | 0.2.1 | r-modelr | 0.1.11 | ucx | 1.14.1 |
-| r-backports | 1.4.1 | r-munsell | 0.5.0 | unixodbc | 2.3.12 |
-| r-base | 4.2.3 | r-nlme | 3.1_163 | xorg-kbproto | 1.0.7 |
-| r-base64enc | 0.1_3 | r-nnet | 7.3_19 | xorg-libice | 1.1.1 |
-| r-bigd | 0.2.0 | r-numderiv | 2016.8_1.1 | xorg-libsm | 1.2.4 |
-| r-bit | 4.0.5 | r-openssl | 2.1.1 | xorg-libx11 | 1.8.7 |
-| r-bit64 | 4.0.5 | r-parallelly | 1.36.0 | xorg-libxau | 1.0.11 |
-| r-bitops | 1.0_7 | r-parsnip | 1.1.1 | xorg-libxdmcp | 1.1.3 |
-| r-blob | 1.2.4 | r-patchwork | 1.1.3 | xorg-libxext | 1.3.4 |
-| r-brew | 1.0_8 | r-pillar | 1.9.0 | xorg-libxrender | 0.9.11 |
-| r-brio | 1.1.3 | r-pkgbuild | 1.4.2 | xorg-libxt | 1.3.0 |
-| r-broom | 1.0.5 | r-pkgconfig | 2.0.3 | xorg-renderproto | 0.11.1 |
-| r-broom.helpers | 1.14.0 | r-pkgdown | 2.0.7 | xorg-xextproto | 7.3.0 |
-| r-bslib | 0.5.1 | r-pkgload | 1.3.3 | xorg-xproto | 7.0.31 |
-| r-cachem | 1.0.8 | r-plotly | 4.10.2 | xz | 5.2.6 |
-| r-callr | 3.7.3 | r-plyr | 1.8.9 | zlib | 1.2.13 |
-| | | | | zstd | 1.5.5 |
-
-## Migration between Apache Spark versions - support
-
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+## Related content
+- [Migration between Apache Spark versions - support](./apache-spark-version-support.md#migration-between-apache-spark-versionssupport)
+- [Synapse runtime for Apache Spark lifecycle and supportability](./runtime-for-apache-spark-lifecycle-and-supportability.md)
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
In this tutorial, you learn how to enable the Synapse Studio connector that's built in to Log Analytics. You can then collect and send Apache Spark application metrics and logs to your [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). Finally, you can use an Azure Monitor workbook to visualize the metrics and logs. > [!NOTE]
-> This feature is currently unavailable in the Spark 3.4 runtime but will be supported post-GA.
+> This feature is currently unavailable in the [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) but will be supported post-GA.
## Configure workspace information
synapse-analytics Apache Spark Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-concepts.md
Previously updated : 02/09/2023 Last updated : 04/02/2024 # Apache Spark in Azure Synapse Analytics Core Concepts
You can read how to create a Spark pool and see all their properties here [Get s
Spark instances are created when you connect to a Spark pool, create a session, and run a job. As multiple users may have access to a single Spark pool, a new Spark instance is created for each user that connects.
-When you submit a second job, if there's capacity in the pool, the existing Spark instance also has capacity. Then, the existing instance will process the job. Otherwise, if capacity is available at the pool level, then a new Spark instance will be created.
+When you submit a second job, if there's capacity in the pool and the existing Spark instance still has capacity, the existing instance processes the job. Otherwise, if capacity is available at the pool level, a new Spark instance is created.
Billing for the instances starts when the Azure VM(s) starts. Billing for the Spark pool instances stops when pool instances change to terminating. For more information on how Azure VMs are started and deallocated, see [States and billing status of Azure Virtual Machines](/azure/virtual-machines/states-billing).
Billing for the instances starts when the Azure VM(s) starts. Billing for the Sp
- You create a Spark pool called SP1; it has a fixed cluster size of 20 medium nodes - You submit a notebook job, J1 that uses 10 nodes, a Spark instance, SI1 is created to process the job - You now submit another job, J2, that uses 10 nodes because there's still capacity in the pool and the instance, the J2, is processed by SI1-- If J2 had asked for 11 nodes, there wouldn't have been capacity in SP1 or SI1. In this case, if J2 comes from a notebook, then the job will be rejected; if J2 comes from a batch job, then it will be queued.
+- If J2 had asked for 11 nodes, there wouldn't have been capacity in SP1 or SI1. In this case, if J2 comes from a notebook, then the job is rejected; if J2 comes from a batch job, it is queued.
- Billing starts at the submission of notebook job J1. - The Spark pool is instantiated with 20 medium nodes, each with 8 vCores, and typically takes ~3 minutes to start. 20 x 8 = 160 vCores. - Depending on the exact Spark pool start-up time, idle timeout and the runtime of the two notebook jobs; the pool is likely to run for between 18 and 20 minutes (Spark pool instantiation time + notebook job runtime + idle timeout). - Assuming 20-minute runtime, 160 x 0.3 hours = 48 vCore hours.
- - Note: vCore hours are billed per second, vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
+ - Note: vCore hours are billed per minute and vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
### Example 2
Billing for the instances starts when the Azure VM(s) starts. Billing for the Sp
- At the submission of J2, the pool autoscales by adding another 10 medium nodes, and typically takes 4 minutes to autoscale. Adding 10 x 8, 80 vCores for a total of 160 vCores. - Depending on the Spark pool start-up time, runtime of the first notebook job J1, the time to scale-up the pool, runtime of the second notebook, and finally the idle timeout; the pool is likely to run between 22 and 24 minutes (Spark pool instantiation time + J1 notebook job runtime all at 80 vCores) + (Spark pool autoscale-up time + J2 notebook job runtime + idle timeout all at 160 vCores). - 80 vCores for 4 minutes + 160 vCores for 20 minutes = 58.67 vCore hours.
- - Note: vCore hours are billed per second, vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
+ - Note: vCore hours are billed per minute and vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
### Example 3
Billing for the instances starts when the Azure VM(s) starts. Billing for the Sp
- Another Spark pool SI2 is instantiated with 20 medium nodes, each with 8 vCores, and typically takes ~3 minutes to start. 20 x 8, 160 vCores - Depending on the exact Spark pool start-up time, the idle timeout and the runtime of the first notebook job; the SI2 pool is likely to run for between 18 and 20 minutes (Spark pool instantiation time + notebook job runtime + idle timeout). - Assuming the two pools run for 20 minutes each, 160 x 0.3 x 2 = 96 vCore hours.
- - Note: vCore hours are billed per second, vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
+ - Note: vCore hours are billed per minute and vCore pricing varies by Azure region. For more information, see [Azure Synapse Pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing)
## Quotas and resource constraints in Apache Spark for Azure Synapse
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Last updated 02/15/2022
# Use external Hive Metastore for Synapse Spark Pool > [!NOTE]
-> External Hive metastores will no longer be supported in Spark 3.4 and subsequent versions in Synapse.
+> External Hive metastores will no longer be supported in [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) and subsequent versions in Synapse.
Azure Synapse Analytics allows Apache Spark pools in the same workspace to share a managed HMS (Hive Metastore) compatible metastore as their catalog. When customers want to persist the Hive catalog metadata outside of the workspace, and share catalog objects with other computational engines outside of the workspace, such as HDInsight and Azure Databricks, they can connect to an external Hive Metastore. In this article, you can learn how to connect Synapse Spark to an external Apache Hive Metastore.
try {
``` ## Configure Spark to use the external Hive Metastore
-After creating the linked service to the external Hive Metastore successfully, you need to setup a few Spark configurations to use the external Hive Metastore. You can both set up the configuration at Spark pool level, or at Spark session level.
+After creating the linked service to the external Hive Metastore successfully, you need to set up a few Spark configurations to use the external Hive Metastore. You can set up the configuration at either the Spark pool level or the Spark session level.
Here are the configurations and descriptions:
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The runtimes have the following advantages:
> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4 and Apache Spark 3.1. > * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes. > * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes.
-> * After these dates, we will not be addressing any support tickets related to Spark 2.4 or 3.1. There will be no release pipeline in place for bug or security fixes for Spark 2.4 and 3.1. Utilizing Spark 2.4 or 3.1 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
+> * After these dates, we will not be addressing any support tickets related to Spark 2.4 or 3.1. There will be no release pipeline in place for bug or security fixes for Spark 2.4 and 3.1. **Utilizing Spark 2.4 or 3.1 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.**
> [!TIP]
-> We strongly recommend proactively upgrading workloads to a more recent version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)). Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
+> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md)). Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
-| Runtime name | Release date | Release stage | End of Support announcement date | End of Support effective date |
-| | | | | |
-| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | Public Preview | | |
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | Q1 2025 |
+| Runtime name | Release date | Release stage | End of Support announcement date | End of Support effective date |
+| | || | |
+| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | | |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | Q1 2025 |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Support Announced__ | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Support__ | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Support__ | __July 29, 2022__ | __September 29, 2023__ |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Support__ | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Support__ | __July 29, 2022__ | __September 29, 2023__ |
## Runtime release stages
The patch policy differs based on the [runtime lifecycle stage](./runtime-for-ap
- End of Support announced runtime won't have bug and feature fixes. Security fixes are backported based on risk assessment. + ## Migration between Apache Spark versions - support
-General Upgrade guidelines/ FAQs:
+This guide provides a structured approach for users looking to upgrade their Azure Synapse Runtime for Apache Spark workloads from versions 2.4, 3.1, 3.2, or 3.3 to [the latest GA version, such as 3.4](./apache-spark-34-runtime.md). Upgrading to the most recent version enables users to benefit from performance enhancements, new features, and improved security measures. It is important to note that transitioning to a higher version may require adjustments to your existing Spark code due to incompatibilities or deprecated features.
+
+### Step 1: Evaluate and plan
+- **Assess Compatibility:** Start by reviewing the Apache Spark migration guides to identify any potential incompatibilities, deprecated features, and new APIs between your current Spark version (2.4, 3.1, 3.2, or 3.3) and the target version (for example, 3.4).
+- **Analyze Codebase:** Carefully examine your Spark code to identify the use of deprecated or modified APIs. Pay particular attention to SQL queries and User Defined Functions (UDFs), which may be affected by the upgrade.
+
+### Step 2: Create a new Spark pool for testing
+- **Create a New Pool:** In Azure Synapse, go to the Spark pools section and set up a new Spark pool. Select the target Spark version (e.g., 3.4) and configure it according to your performance requirements.
+- **Configure Spark Pool Configuration:** Ensure that all libraries and dependencies in your new Spark pool are updated or replaced to be compatible with Spark 3.4.
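As a hedged illustration of this step, creating a test pool with Azure PowerShell might look like the following sketch (the Az.Synapse module is assumed, and the workspace name, pool name, node size, and node count are placeholders to adapt to your own requirements):

```powershell
# Requires the Az.Synapse module and an authenticated session (Connect-AzAccount).
# Creates a new pool on the target runtime version for side-by-side testing.
New-AzSynapseSparkPool `
    -WorkspaceName "contoso-synapse-ws" `
    -Name "sparkpool34test" `
    -SparkVersion "3.4" `
    -NodeSize Medium `
    -NodeCount 3
```

The same pool can also be created from Synapse Studio or the Azure portal; the important part is selecting the target runtime version for the new pool.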
+
+### Step 3: Migrate and test your code
+- **Migrate Code:** Update your code to be compliant with the new or revised APIs in Apache Spark 3.4. This involves addressing deprecated functions and adopting new features as detailed in the official Apache Spark documentation.
+- **Test in Development Environment:** Test your updated code within a development environment in Azure Synapse, not locally. This step is essential for identifying and fixing any issues before moving to production.
+- **Deploy and Monitor:** After thorough testing and validation in the development environment, deploy your application to the new Spark 3.4 pool. It is critical to monitor the application for any unexpected behaviors. Utilize the monitoring tools available in Azure Synapse to keep track of your Spark applications' performance.
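As an optional, hedged check during assessment and after the switch, you can list which runtime version each existing pool is on (cmdlet and property names per the Az.Synapse module; the workspace name is a placeholder):

```powershell
# Show every Spark pool in the workspace with its configured runtime version.
Get-AzSynapseSparkPool -WorkspaceName "contoso-synapse-ws" |
    Select-Object Name, SparkVersion
```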
**Question:** What steps should be taken in migrating from 2.4 to 3.X?
-**Answer:** Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
+**Answer:** Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
**Question:** I got an error when I tried to upgrade the Spark pool runtime using a PowerShell cmdlet while the pool had attached libraries. **Answer:** Don't use the PowerShell cmdlet if you have custom libraries installed in your Synapse workspace. Instead, follow these steps:
- 1. Recreate Spark Pool 3.3 from the ground up.
- 1. Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3.
+1. Recreate Spark Pool 3.3 from the ground up.
+1. Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3.
## Related content - [Manage libraries for Apache Spark in Azure Synapse Analytics](apache-spark-azure-portal-add-libraries.md)-- [Synapse runtime for Apache Spark lifecycle and supportability](runtime-for-apache-spark-lifecycle-and-supportability.md)
+- [Synapse runtime for Apache Spark lifecycle and supportability](runtime-for-apache-spark-lifecycle-and-supportability.md)
synapse-analytics Azure Synapse Diagnostic Emitters Azure Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-eventhub.md
The Synapse Apache Spark diagnostic emitter extension is a library that enables
In this tutorial, you learn how to use the Synapse Apache Spark diagnostic emitter extension to emit Apache Spark applications' logs, event logs, and metrics to your Azure Event Hubs. > [!NOTE]
-> This feature is currently unavailable in the Spark 3.4 runtime but will be supported post-GA.
+> This feature is currently unavailable in the [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) but will be supported post-GA.
## Collect logs and metrics to Azure Event Hubs
synapse-analytics Azure Synapse Diagnostic Emitters Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-storage.md
The Synapse Apache Spark diagnostic emitter extension is a library that enables
In this tutorial, you learn how to use the Synapse Apache Spark diagnostic emitter extension to emit Apache Spark applications' logs, event logs, and metrics to your Azure storage account. > [!NOTE]
-> This feature is currently unavailable in the Spark 3.4 runtime but will be supported post-GA.
+> This feature is currently unavailable in the [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) but will be supported post-GA.
## Collect logs and metrics to storage account
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
mssparkutils.fs.fastcp('source file or directory', 'destination file or director
``` > [!NOTE]
-> The method only supports in Spark 3.3 and Spark 3.4.
+> The method is only supported in [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) and [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md).
### Preview file content
mssparkutils.notebook.runMultiple(DAG)
> [!NOTE] >
-> - The method only supports in Spark 3.3 and Spark 3.4.
+> - The method is only supported in [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) and [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md).
> - The parallelism degree of the multiple notebook run is restricted to the total available compute resource of a Spark session.
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- | | June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. |
-| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery]((https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. |
+| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. |
| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). | | June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. | | May 2022 | **Synapse Data Explorer live query in Excel** | Using the [new Data Explorer web experience Open in Excel feature](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500), you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members. You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To create an Excel Workbook connected to Synapse Data Explorer, [start by running a query in the Web experience](https://aka.ms/adx.help.livequery). |
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
| July 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer (Preview)** | You can now use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). | | July 2022 | **Render charts for each y column** | Synapse Web Data Explorer now supports rendering charts for each y column. For an example, see the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_6).| | June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. |
-| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery]((https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. |
+| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. |
| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). | | June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards are automatically refreshed to present the data with the selected time zone. |
trusted-signing How To Sign Ci Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-sign-ci-policy.md
+
+ Title: Signing CI Policies #Required; page title is displayed in search results. Include the brand.
+description: Learn how to sign new CI policies with Trusted Signing. #Required; article description that is displayed in search results.
++++ Last updated : 04/04/2024 #Required; mm/dd/yyyy format.+++
+# Sign CI Policies with Trusted Signing
+
+To sign new CI policies with the service, first complete the following prerequisites.
++
+Prerequisites:
+* A Trusted Signing account, Identity Validation, and Certificate Profile.
+* Ensure there are proper individual or group role assignments for signing ("Trusted Signing Certificate Profile Signer" role).
+* [Azure PowerShell on Windows](https://learn.microsoft.com/powershell/azure/install-azps-windows) installed
+* [Az.CodeSigning](https://learn.microsoft.com/powershell/module/az.codesigning/) module downloaded
+
+Overview of steps:
+1. Unzip the Az.CodeSigning module to a folder
+2. Open Windows PowerShell or [PowerShell 7](https://github.com/PowerShell/PowerShell/releases/latest)
+3. In the Az.CodeSigning folder, run
+```
+Import-Module .\Az.CodeSigning.psd1
+```
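Optionally, to confirm the module loaded before continuing, you can list the cmdlets it exposes (a quick sanity check, not one of the documented steps):
```
Get-Command -Module Az.CodeSigning
```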
+4. Optionally you can create a `metadata.json` file:
+```
+"Endpoint": "https://xxx.codesigning.azure.net/"
+"TrustedSigningAccountName": "<Trusted Signing Account Name>",
+"CertificateProfileName": "<Certificate Profile Name>",
+```
+
+5. [Get the root certificate](https://learn.microsoft.com/powershell/module/az.codesigning/get-azcodesigningrootcert) to be added to the trust store
+```
+Get-AzCodeSigningRootCert -AccountName TestAccount -ProfileName TestCertProfile -EndpointUrl https://xxx.codesigning.azure.net/ -Destination c:\temp\root.cer
+```
+Or using a metadata.json
+```
+Get-AzCodeSigningRootCert -MetadataFilePath C:\temp\metadata.json -Destination c:\temp\root.cer
+```
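After downloading the root certificate, it needs to be placed in the trusted root store of the machines that will validate the signed policy. As a hedged sketch (the target store and machine scope are assumptions to adapt to your deployment; run from an elevated session):
```
Import-Certificate -FilePath c:\temp\root.cer -CertStoreLocation Cert:\LocalMachine\Root
```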
+6. To get the EKU (Extended Key Usage) to insert into your policy:
+```
+Get-AzCodeSigningCustomerEku -AccountName TestAccount -ProfileName TestCertProfile -EndpointUrl https://xxx.codesigning.azure.net/
+```
+Or
+
+```
+Get-AzCodeSigningCustomerEku -MetadataFilePath C:\temp\metadata.json
+```
+7. To sign your policy, you run the invoke command:
+```
+Invoke-AzCodeSigningCIPolicySigning -accountName TestAccount -profileName TestCertProfile -endpointurl "https://xxx.codesigning.azure.net/" -Path C:\Temp\defaultpolicy.bin -Destination C:\Temp\defaultpolicy_signed.bin -TimeStamperUrl: http://timestamp.acs.microsoft.com
+```
+
+Or use a `metadata.json` file and the following command:
+
+```
+Invoke-AzCodeSigningCIPolicySigning -MetadataFilePath C:\temp\metadata.json -Path C:\Temp\defaultpolicy.bin -Destination C:\Temp\defaultpolicy_signed.bin -TimeStamperUrl: http://timestamp.acs.microsoft.com
+```
+
+## Creating and Deploying a CI Policy
+
+For steps on creating and deploying your CI policy refer to:
+* [Use signed policies to protect Windows Defender Application Control against tampering](https://learn.microsoft.com/windows/security/application-security/application-control/windows-defender-application-control/deployment/use-signed-policies-to-protect-wdac-against-tampering)
+* [Windows Defender Application Control design guide](https://learn.microsoft.com/windows/security/application-security/application-control/windows-defender-application-control/design/wdac-design-guide)
+
trusted-signing How To Signing Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-signing-integrations.md
Previously updated : 03/21/2024 #Required; mm/dd/yyyy format. Last updated : 04/04/2024 #Required; mm/dd/yyyy format.
Trusted Signing currently supports the following signing integrations:
* ADO Task * PowerShell for Authenticode * Azure PowerShell - App Control for Business CI Policy
-We constantly work to support more signing integrations and will update the above list if/when more are available.
+
+We constantly work to support more signing integrations and update the above when more become available.
This article explains how to set up each of the above Trusted Signing signing integrations.
The components that SignTool.exe uses to interface with Trusted Signing require
### Download and install Trusted Signing Dlib package Complete these steps to download and install the Trusted Signing Dlib package (.ZIP):
-1. Download the [Trusted Signing Dlib package](https://www.nuget.org/packages/Azure.CodeSigning.Client).
+1. Download the [Trusted Signing Dlib package](https://www.nuget.org/packages/Microsoft.Trusted.Signing.Client).
2. Extract the Trusted Signing Dlib zip content and install it onto your signing node in a directory of your choice. You're required to install it onto the node you'll be signing files from with SignTool.exe.
To sign using Trusted Signing, you need to provide the details of your Trusted S
```
{
  "Endpoint": "<Code Signing Account Endpoint>",
-  "CodeSigningAccountName": "<Code Signing Account Name>",
+  "TrustedSigningAccountName": "<Trusted Signing Account Name>",
  "CertificateProfileName": "<Certificate Profile Name>",
  "CorrelationId": "<Optional CorrelationId*>"
}
```
Trusted Signing certificates have a 3-day validity, so timestamping is critical
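As a hedged illustration of how the Dlib package and the metadata file come together in a SignTool invocation (the dlib path and file names below are assumptions based on where you extracted the package; check the article's SignTool instructions for the exact parameters):
```
signtool sign /v /fd SHA256 /tr "http://timestamp.acs.microsoft.com" /td SHA256 /dlib "C:\TrustedSigning\bin\x64\Azure.CodeSigning.Dlib.dll" /dmdf "C:\TrustedSigning\metadata.json" "C:\path\to\file-to-sign.exe"
```
The /tr and /td options apply the timestamp, which matters here because of the short certificate validity mentioned above.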
## Use other signing integrations with Trusted Signing This section explains how to set up signing integrations other than [SignTool](#set-up-signtool-with-trusted-signing) with Trusted Signing.
-* GitHub Action – To use the GitHub action for Trusted Signing, visit [Azure Code Signing · Actions · GitHub Marketplace](https://github.com/marketplace/actions/azure-code-signing) and follow the instructions to set up and use GitHub action.
+* GitHub Action – To use the GitHub action for Trusted Signing, visit [Trusted Signing · Actions · GitHub Marketplace](https://github.com/azure/trusted-signing-action) and follow the instructions to set up and use the GitHub action.
-* ADO Task – To use the Trusted Signing AzureDevOps task, visit [Azure Code Signing - Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=VisualStudioClient.AzureCodeSigning) and follow the instructions for setup.
+* ADO Task – To use the Trusted Signing Azure DevOps task, visit [Trusted Signing - Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=VisualStudioClient.TrustedSigning&ssr=false#overview) and follow the instructions for setup.
-* PowerShell for Authenticode – To use PowerShell for Trusted Signing, visit [PowerShell Gallery | AzureCodeSigning 0.2.15](https://www.powershellgallery.com/packages/AzureCodeSigning/0.2.15) to install the PowerShell module.
+* PowerShell for Authenticode – To use PowerShell for Trusted Signing, visit [PowerShell Gallery | Trusted Signing 0.3.8](https://www.powershellgallery.com/packages/TrustedSigning/0.3.8) to install the PowerShell module.
-* Azure PowerShell – App Control for Business CI Policy - App Control for Windows [link to CI policy signing tutorial].
+* Azure PowerShell: App Control for Business CI Policy – To use Trusted Signing for CI policy signing, follow the instructions at [Signing a New CI policy](./how-to-sign-ci-policy.md) and visit the [Az.CodeSigning PowerShell Module](https://learn.microsoft.com/powershell/azure/install-azps-windows).
* Trusted Signing SDK – To create your own signing integration, our [Trusted Signing SDK](https://www.nuget.org/packages/Azure.CodeSigning.Sdk) is publicly available.
trusted-signing Tutorial Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/tutorial-assign-roles.md
The Identity Verified role specifically is needed to manage Identity Validation
## Assign roles in Trusted Signing Complete the following steps to assign roles in Trusted Signing. 1. Navigate to your Trusted Signing account on the Azure portal and select the **Access Control (IAM)** tab in the left menu. 2. Select the **Roles** tab and search "Trusted Signing". You can see in the screenshot below the two custom roles. ![Screenshot of Azure portal UI with the Trusted Signing custom RBAC roles.](./media/trusted-signing-rbac-roles.png)
-3. To assign these roles, select on the **Add** drop down and select **Add role assignment**. Follow the [Assign roles in Azure](../role-based-access-control/role-assignments-portal.md) guide to assign the relevant roles to your identities.
+3. To assign these roles, select on the **Add** drop down and select **Add role assignment**. Follow the [Assign roles in Azure](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current) guide to assign the relevant roles to your identities. _You'll need at least a Contributor role to create a Trusted Signing account and certificate profile._
+4. For more granular access control at the certificate profile level, you can use the Azure CLI to assign roles. The following command assigns the _Trusted Signing Certificate Profile Signer_ role to users or service principals so that they can sign files.
+```
+az role assignment create --assignee <objectId of user/service principal>
+--role "Trusted Signing Certificate Profile Signer"
+--scope "/subscriptions/<subscriptionId>/resourceGroups/<resource-group-name>/providers/Microsoft.CodeSigning/trustedSigningAccounts/<trustedsigning-account-name>/certificateProfiles/<profileName>"
+```
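To confirm the assignment took effect at that scope, an optional follow-up check (using the same placeholder values as above) is:
```
az role assignment list --assignee <objectId of user/service principal> --scope "/subscriptions/<subscriptionId>/resourceGroups/<resource-group-name>/providers/Microsoft.CodeSigning/trustedSigningAccounts/<trustedsigning-account-name>/certificateProfiles/<profileName>"
```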
## Related content * [What is Azure role-based access control (RBAC)?](../role-based-access-control/overview.md)
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
description: This article provides a summary of supported regions and operating
Previously updated : 03/26/2024 Last updated : 04/01/2024
Update Manager doesn't support driver updates.
### Extended Security Updates (ESU) for Windows Server
-Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. To enroll in Windows Server 2012 Extended Security Updates, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc)
+Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. To enroll in Windows Server 2012 Extended Security Updates, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2.](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc)
### First-party updates on Windows
By default, the Windows Update client is configured to provide updates only for
Use one of the following options to perform the settings change at scale: -- For servers configured to patch on a schedule from Update Manager (with VM `PatchSettings` set to `AutomaticByPlatform = Azure-Orchestrated`), and for all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change:
+- For servers configured to patch on a schedule from Update Manager (with virtual machine `PatchSettings` set to `AutomaticByPlatform = Azure-Orchestrated`), and for all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change:
```powershell $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
Use one of the following options to perform the settings change at scale:
$ServiceManager.AddService2($ServiceId,7,"") ``` -- For servers running Windows Server 2016 or later that aren't using Update Manager scheduled patching (with VM `PatchSettings` set to `AutomaticByOS = Azure-Orchestrated`), you can use Group Policy to control this process by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+- For servers running Windows Server 2016 or later that aren't using Update Manager scheduled patching (with virtual machine `PatchSettings` set to `AutomaticByOS = Azure-Orchestrated`), you can use Group Policy to control this process by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
> [!NOTE] > Run the following PowerShell script on the server to disable first-party updates:
Use one of the following options to perform the settings change at scale:
> $ServiceManager.RemoveService($ServiceId) > ```
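Because the opt-in script is split across the excerpt above, here's a consolidated sketch. The GUID is the well-known Microsoft Update service ID assumed by these snippets; the commented-out line mirrors the opt-out note.

```powershell
# Opt a server in to first-party (Microsoft) updates by registering the Microsoft Update service
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
$ServiceId = "7971f918-a847-4430-9279-4a52d1efe18d"   # well-known Microsoft Update service ID
$ServiceManager.AddService2($ServiceId, 7, "")

# To disable first-party updates again, remove the same service registration:
# $ServiceManager.RemoveService($ServiceId)
```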
-### Third-party updates
+### Third party updates
-**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
+**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
-**Linux**: If you include a specific third-party software repository in the Linux package manager repository location, it's scanned when it performs software update operations. The package isn't available for assessment and installation if you remove it.
+**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it's scanned when it performs software update operations. The package isn't available for assessment and installation if you remove it.
Update Manager doesn't support managing the Configuration Manager client.
Update Manager doesn't support managing the Configuration Manager client.
Update Manager scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following table lists the Azure public cloud where you can use Update Manager.
-# [Azure VMs](#tab/azurevm)
+#### [Azure Public cloud](#tab/public)
+
+### Azure VMs
Azure Update Manager is available in all Azure public regions where compute virtual machines are available.
-# [Azure Arc-enabled servers](#tab/azurearc)
+### Azure Arc-enabled servers
+ Azure Update Manager is currently supported in the following regions, which means your Arc-enabled machines must be located in one of these regions.
UAE | UAE North
United Kingdom | UK South </br> UK West United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
+#### [Azure for US Government](#tab/gov)
+
+**Geography** | **Supported regions** | **Details**
+ | |
+United States | USGovVirginia </br> USGovArizona </br> USGovTexas | For both Azure and Arc VMs </br> For both Azure and Arc VMs </br> For Azure VMs only
+
+#### [Azure operated by 21Vianet](#tab/21via)
+
+**Geography** | **Supported regions** | **Details**
+ | |
+China | ChinaEast </br> ChinaEast2 </br> ChinaNorth </br> ChinaNorth2 | For Azure VMs only </br> For both Azure and Arc VMs </br> For Azure VMs only </br> For both Azure and Arc VMs.
++ ## Supported operating systems >[!NOTE] > - All operating systems are assumed to be x64. For this reason, x86 isn't supported for any operating system.
-> - Update Manager doesn't support VMs created from CIS-hardened images.
+> - Update Manager doesn't support virtual machines created from CIS-hardened images.
### Support for Azure Update Manager operations
Following is the list of supported images and no other marketplace images releas
| **Publisher**| **Offer** | **SKU**| **Unsupported image(s)** | |-|-|--| |
-|microsoftwindowsserver | windowsserver | * | windowsserver 2008|
+|microsoftwindowsserver | windows server | * | windowsserver 2008|
|microsoftbiztalkserver | biztalk-server | *| |microsoftdynamicsax | dynamics | * | |microsoftpowerbi |* |* | |microsoftsharepoint | microsoftsharepointserver | *|
-|microsoftvisualstudio | Visualstudio* | *-ws2012r2. </br> *-ws2016-ws2019 </br> *-ws2022 |
+|microsoftvisualstudio | Visualstudio* | *-ws2012r2 </br> *-ws2016-ws2019 </br> *-ws2022 |
|microsoftwindowsserver | windows-cvm | * | |microsoftwindowsserver | windowsserverdotnet | *| |microsoftwindowsserver | windowsserver-gen2preview | *|
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Last updated 04/03/2024
[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager.
+## April 2024
+
+### New region support
+
+Azure Update Manager is now supported in US Government and Microsoft Azure operated by 21Vianet. [Learn more](support-matrix.md#supported-regions)
++ ## February 2024 ### Migration scripts to move machines and schedules from Automation Update Management to Azure Update Manager (preview)
-Migration scripts allow you to move all machines and schedules in an automation account from Automation Update Management to azure Update Management in an automated fashion. [Learn more](guidance-migration-automation-update-management-azure-update-manager.md).
+Migration scripts allow you to move all machines and schedules in an automation account from Automation Update Management to Azure Update Manager in an automated fashion. [Learn more](guidance-migration-automation-update-management-azure-update-manager.md).
### Updates blade in Azure Update Manager (preview)
Dynamic scope is an advanced capability of schedule patching. You can now create
### Customized image support
-Update Manager now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images.See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
+Update Manager now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
### Multi-subscription support
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
description: Learn how to add session hosts virtual machines to a host pool in A
Previously updated : 01/24/2024 Last updated : 02/28/2024 # Add session hosts to a host pool
Here's how to create session hosts and register them to a host pool using the Az
| Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. | | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use different SKU, select **Change size**, then select from the list. |
- | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to self-register your subscription to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). If you're using Teams media optimizations you should update the [WebRTC redirector service to 1.45.2310.13001](whats-new-webrtc.md#updates-for-version-145231013001).|
+ | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). If you're using Teams media optimizations you should update the [WebRTC redirector service to 1.45.2310.13001](whats-new-webrtc.md#updates-for-version-145231013001).|
| Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session hosts at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. | | OS disk size | Select a size for the OS disk.<br /><br />If you enable hibernate, ensure the OS disk is large enough to store the contents of the memory in addition to the OS and other applications. |
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Title: Create and assign an autoscale scaling plan for Azure Virtual Desktop
description: How to create and assign an autoscale scaling plan to optimize deployment costs. Previously updated : 01/10/2024 Last updated : 02/28/2024
To use scaling plans, make sure you follow these guidelines:
- Scaling plan configuration data must be stored in the same region as the host pool configuration. Deploying session host VMs is supported in all Azure regions. - When using autoscale for pooled host pools, you must have a configured *MaxSessionLimit* parameter for that host pool. Don't use the default value. You can configure this value in the host pool settings in the Azure portal or run the [New-AzWvdHostPool](/powershell/module/az.desktopvirtualization/new-azwvdhostpool) or [Update-AzWvdHostPool](/powershell/module/az.desktopvirtualization/update-azwvdhostpool) PowerShell cmdlets (see the example at the end of this section). - You must grant Azure Virtual Desktop access to manage the power state of your session host VMs. You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to assign the role-based access control (RBAC) role for the Azure Virtual Desktop service principal on those subscriptions. This is part of the **User Access Administrator** and **Owner** built-in roles.-- If you want to use personal desktop autoscale with hibernation (preview), you will need to [self-register your subscription](../virtual-machines/hibernate-resume.md) and enable the hibernation feature when [creating VMs](deploy-azure-virtual-desktop.md) for your personal host pool. For the full list of prerequisites for hibernation, see [Prerequisites to use hibernation](../virtual-machines/hibernate-resume.md).
+- If you want to use personal desktop autoscale with hibernation (preview), you will need to enable the hibernation feature when [creating VMs](deploy-azure-virtual-desktop.md) for your personal host pool. For the full list of prerequisites for hibernation, see [Prerequisites to use hibernation](../virtual-machines/hibernate-resume.md).
> [!IMPORTANT] > Hibernation is currently in PREVIEW.
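For the *MaxSessionLimit* prerequisite above, here's a minimal Az.DesktopVirtualization sketch; the resource group and host pool names are placeholders.

```powershell
# Set an explicit session limit on a pooled host pool so autoscale has a real ceiling to work with
Update-AzWvdHostPool -ResourceGroupName "<resource-group-name>" `
    -Name "<host-pool-name>" `
    -MaxSessionLimit 10
```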
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
Previously updated : 01/24/2024 Last updated : 02/28/2024 # Deploy Azure Virtual Desktop
Here's how to create a host pool using the Azure portal.
| Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. | | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use different SKU, select **Change size**, then select from the list. |
- | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to self-register your subscription to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). If you're using Teams media optimizations you should update the [WebRTC redirector service to 1.45.2310.13001](whats-new-webrtc.md#updates-for-version-145231013001).|
+ | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). If you're using Teams media optimizations you should update the [WebRTC redirector service to 1.45.2310.13001](whats-new-webrtc.md#updates-for-version-145231013001).|
| Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session hosts at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. | | OS disk size | Select a size for the OS disk.<br /><br />If you enable hibernate, ensure the OS disk is large enough to store the contents of the memory in addition to the OS and other applications. |
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Previously updated : 11/28/2023 Last updated : 04/08/2024
yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --outp
# Create the snapshot snapshot=$(az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true)
-az disk create -g resourceGroupName -n newDiskName --source $snapshot --logical-sector-size $logicalSectorSize --location $location --zone $zone
+az disk create -g $resourceGroupName -n $newDiskName --source $snapshot --logical-sector-size $logicalSectorSize --location $location --zone $zone --sku $storageType
```
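To confirm that the new disk picked up the target SKU (and, if you set it, the logical sector size), you can inspect it afterward. This sketch assumes the Az PowerShell module and placeholder names for the resource group and new disk.

```powershell
# Inspect the newly created disk and confirm its storage type (SKU) and sector size
$disk = Get-AzDisk -ResourceGroupName "<resource-group-name>" -DiskName "<new-disk-name>"
$disk.Sku.Name
$disk.CreationData.LogicalSectorSize   # logical sector size, if one was set at creation
```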
virtual-machines Hibernate Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume.md
The following Windows operating systems support hibernation:
- Capacity reservations ## Prerequisites to use hibernation-- The hibernate feature is enabled for your subscription. - A persistent OS disk large enough to store the contents of the RAM, OS and other applications running on the VM is connected. - The VM size supports hibernation. - The VM OS supports hibernation.
The following Windows operating systems support hibernation:
- Hibernation is enabled on your VM when creating the VM. - If a VM is being created from an OS disk or a Compute Gallery image, then the OS disk or Gallery Image definition supports hibernation.
-## Enabling hibernation feature for your subscription
-Use the following steps to enable this feature for your subscription:
-
-### [Portal](#tab/enablehiberPortal)
-1. In your Azure subscription, go to the Settings section and select 'Preview features'.
-1. Search for 'hibernation'.
-1. Check the 'Hibernation Preview' item.
-1. Click 'Register'.
-
-![Screenshot showing the Azure subscription preview portal with 4 numbers representing different steps in enabling the hibernation feature.](./media/hibernate-resume/hibernate-register-preview-feature.png)
-
-### [PowerShell](#tab/enablehiberPS)
-```powershell
-Register-AzProviderFeature -FeatureName "VMHibernationPreview" -ProviderNamespace "Microsoft.Compute"
-```
-### [CLI](#tab/enablehiberCLI)
-```azurecli
-az feature register --name VMHibernationPreview --namespace Microsoft.Compute
-```
--
-Confirm that the registration state is Registered (registration takes a few minutes) using the following command before trying out the feature.
-
-### [Portal](#tab/checkhiberPortal)
-In the Azure portal under 'Preview features', select 'Hibernation Preview'. The registration state should show as 'Registered'.
-
-![Screenshot showing the Azure subscription preview portal with the hibernation feature listed as registered.](./media/hibernate-resume/hibernate-is-registered-preview-feature.png)
-
-### [PowerShell](#tab/checkhiberPS)
-```powershell
-Get-AzProviderFeature -FeatureName "VMHibernationPreview" -ProviderNamespace "Microsoft.Compute"
-```
-### [CLI](#tab/checkhiberCLI)
-```azurecli
-az feature show --name VMHibernationPreview --namespace Microsoft.Compute
-```
-- ## Getting started with hibernation To hibernate a VM, you must first enable the feature while creating the VM. You can only enable hibernation for a VM on initial creation. You can't enable this feature after the VM is created.
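Because hibernation can only be turned on at creation time, a rough Az PowerShell sketch looks like the following. It assumes a recent Az.Compute version that exposes the -HibernationEnabled switch on New-AzVMConfig; the names and VM size are placeholders.

```powershell
# Build a VM configuration with hibernation enabled (it can't be added after the VM is created)
$vmConfig = New-AzVMConfig -VMName "<vm-name>" -VMSize "Standard_D4s_v5" -HibernationEnabled

# ... attach the OS image, NIC, and disks to $vmConfig, then create the VM with New-AzVM ...

# Later, confirm hibernation is enabled on the deployed VM
$vm = Get-AzVM -ResourceGroupName "<resource-group-name>" -Name "<vm-name>"
$vm.AdditionalCapabilities.HibernationEnabled
```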
virtual-wan How To Palo Alto Cloud Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-palo-alto-cloud-ngfw.md
To create a new virtual WAN, use the steps in the following article:
## Known limitations
-* Check [Palo Alto Networks documentation]() for the list of regions where Palo Alto Networks Cloud NGFW is available.
+* Check [Palo Alto Networks documentation](https://docs.paloaltonetworks.com/cloud-ngfw/azure/cloud-ngfw-for-azure/getting-started-with-cngfw-for-azure/supported-regions-and-zones) for the list of regions where Palo Alto Networks Cloud NGFW is available.
* Palo Alto Networks Cloud NGFW can't be deployed with Network Virtual Appliances in the Virtual WAN hub. * All other limitations in the [Routing Intent and Routing policies documentation limitations section](how-to-routing-policies.md) apply to Palo Alto Networks Cloud NGFW deployments in Virtual WAN.
vpn-gateway Bgp Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-diagnostics.md
description: Learn how to view important BGP-related information for troubleshooting. -+ Last updated 03/10/2021
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md
description: Learn how to configure IPsec/IKE custom policy for S2S or VNet-to-V
Previously updated : 01/30/2023 Last updated : 04/04/2024 - + # Configure custom IPsec/IKE connection policies for S2S VPN and VNet-to-VNet: Azure portal This article walks you through the steps to configure IPsec/IKE policy for VPN Gateway Site-to-Site VPN or VNet-to-VNet connections using the Azure portal. The following sections help you create and configure an IPsec/IKE policy, and apply the policy to a new or existing connection.
This article walks you through the steps to configure IPsec/IKE policy for VPN G
The instructions in this article help you set up and configure IPsec/IKE policies as shown in the following diagram. 1. Create a virtual network and a VPN gateway. 1. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection.
The following table lists the corresponding Diffie-Hellman groups supported by t
[!INCLUDE [Diffie-Hellman groups](../../includes/vpn-gateway-ipsec-ike-diffie-hellman-include.md)]
-Refer to [RFC3526](https://tools.ietf.org/html/rfc3526) and [RFC5114](https://tools.ietf.org/html/rfc5114) for more details.
+For more information, see [RFC3526](https://tools.ietf.org/html/rfc3526) and [RFC5114](https://tools.ietf.org/html/rfc5114).
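For reference, algorithm, DH group, and SA lifetime choices like these can be expressed as a policy object in Azure PowerShell. The values below are illustrative, not the specific policy configured later in this article.

```powershell
# Example custom IPsec/IKE policy object; all values are illustrative
$ipsecPolicy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA384 -DhGroup DHGroup24 `
    -IpsecEncryption GCMAES256 -IpsecIntegrity GCMAES256 -PfsGroup PFS24 `
    -SALifeTimeSeconds 27000 -SADataSizeKilobytes 102400000

# The policy object can then be applied to a connection, for example:
# Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection -IpsecPolicies $ipsecPolicy
```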
## <a name="crossprem"></a>Create S2S VPN connection with custom policy
-This section walks you through the steps to create a Site-to-Site VPN connection with an IPsec/IKE policy. The following steps create the connection as shown in the following diagram:
+This section walks you through the steps to create a Site-to-Site VPN connection with an IPsec/IKE policy. The following steps create the connection as shown in the following diagram. The on-premises site in this diagram represents **Site6**.
### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet1
-Create the following resources.For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
+Create the following resources. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
1. Create the virtual network **TestVNet1** using the following values.
Configure a custom IPsec/IKE policy with the following algorithms and parameters
The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are similar to those for an S2S VPN connection. You must complete the previous sections in [Create an S2S VPN connection](#crossprem) to create and configure TestVNet1 and the VPN gateway. ### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet2
-Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1.
+Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2, and create a VNet-to-VNet connection to TestVNet1.
Example values:
Example values:
### Step 2: Configure the VNet-to-VNet connection
-1. From the VNet1GW gateway, add a VNet-to-VNet connection to VNet2GW, **VNet1toVNet2**.
+1. From the VNet1GW gateway, add a VNet-to-VNet connection to VNet2GW named **VNet1toVNet2**.
-1. Next, from the VNet2GW, add a VNet-to-VNet connection to VNet1GW, **VNet2toVNet1**.
+1. Next, from the VNet2GW, add a VNet-to-VNet connection to VNet1GW named **VNet2toVNet1**.
1. After you add the connections, you'll see the VNet-to-VNet connections as shown in the following screenshot from the VNet2GW resource:
Example values:
1. After you complete these steps, the connection is established in a few minutes, and you'll have the following network topology.
- :::image type="content" source="./media/ipsec-ike-policy-howto/policy-diagram.png" alt-text="Diagram shows IPsec/IKE policy." border="false" lightbox="./media/ipsec-ike-policy-howto/policy-diagram.png":::
+ :::image type="content" source="./media/ipsec-ike-policy-howto/policy-diagram.png" alt-text="Diagram shows IPsec/IKE policy for VNet-to-VNet and S2S VPN." lightbox="./media/ipsec-ike-policy-howto/policy-diagram.png":::
## To remove custom policy from a connection 1. To remove a custom policy from a connection, go to the connection resource.
-1. On the **Configuration** page, change the IPse /IKE policy from **Custom** to **Default**. This will remove all custom policy previously specified on the connection, and restore the Default IPsec/IKE settings on this connection.
+1. On the **Configuration** page, change the IPsec/IKE policy from **Custom** to **Default**. This removes all custom policy previously specified on the connection and restores the default IPsec/IKE settings on this connection.
1. Select **Save** to remove the custom policy and restore the default IPsec/IKE settings on the connection. ## IPsec/IKE policy FAQ
To view frequently asked questions, go to the IPsec/IKE policy section of the [V
## Next steps
-See [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md) for more details regarding policy-based traffic selectors.
+For more information about policy-based traffic selectors, see [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md).
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
DRS 2.1 includes 17 rule groups, as shown in the following table. Each group con
> [!NOTE] > DRS 2.1 is only available on Azure Front Door Premium.
-|Rule group|Description|
-|||
-|[General](#general-21)|General group|
-|[METHOD-ENFORCEMENT](#drs911-21)|Lock-down methods (PUT, PATCH)|
-|[PROTOCOL-ENFORCEMENT](#drs920-21)|Protect against protocol and encoding issues|
-|[PROTOCOL-ATTACK](#drs921-21)|Protect against header injection, request smuggling, and response splitting|
-|[APPLICATION-ATTACK-LFI](#drs930-21)|Protect against file and path attacks|
-|[APPLICATION-ATTACK-RFI](#drs931-21)|Protect against remote file inclusion (RFI) attacks|
-|[APPLICATION-ATTACK-RCE](#drs932-21)|Protect again remote code execution attacks|
-|[APPLICATION-ATTACK-PHP](#drs933-21)|Protect against PHP-injection attacks|
-|[APPLICATION-ATTACK-NodeJS](#drs934-21)|Protect against Node JS attacks|
-|[APPLICATION-ATTACK-XSS](#drs941-21)|Protect against cross-site scripting attacks|
-|[APPLICATION-ATTACK-SQLI](#drs942-21)|Protect against SQL-injection attacks|
-|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-21)|Protect against session-fixation attacks|
-|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-21)|Protect against JAVA attacks|
-|[MS-ThreatIntel-WebShells](#drs9905-21)|Protect against Web shell attacks|
-|[MS-ThreatIntel-AppSec](#drs9903-21)|Protect against AppSec attacks|
-|[MS-ThreatIntel-SQLI](#drs99031-21)|Protect against SQLI attacks|
-|[MS-ThreatIntel-CVEs](#drs99001-21)|Protect against CVE attacks|
+|Rule group|Managed rule group ID|Description|
+||||
+|[General](#general-21)|General|General group|
+|[METHOD-ENFORCEMENT](#drs911-21)|METHOD-ENFORCEMENT|Lock-down methods (PUT, PATCH)|
+|[PROTOCOL-ENFORCEMENT](#drs920-21)|PROTOCOL-ENFORCEMENT|Protect against protocol and encoding issues|
+|[PROTOCOL-ATTACK](#drs921-21)|PROTOCOL-ATTACK|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-21)|LFI|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-21)|RFI|Protect against remote file inclusion (RFI) attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-21)|RCE|Protect against remote code execution attacks|
+|[APPLICATION-ATTACK-PHP](#drs933-21)|PHP|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-NodeJS](#drs934-21)|NODEJS|Protect against Node JS attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-21)|XSS|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-21)|SQLI|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-21)|FIX|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-21)|JAVA|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-21)|MS-ThreatIntel-WebShells|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-21)|MS-ThreatIntel-AppSec|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-21)|MS-ThreatIntel-SQLI|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-21)|MS-ThreatIntel-CVEs|Protect against CVE attacks|
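The *Managed rule group ID* values are what you reference when overriding rules at the group level. As a hedged sketch using the Az.FrontDoor cmdlets (rule ID 942100 and the resource names are illustrative, and depending on the module version DRS 2.1 may also require a rule set action), disabling a single rule in the SQLI group looks roughly like this:

```powershell
# Sketch: disable one rule in the SQLI rule group of DRS 2.1 on a Front Door WAF policy
$ruleOverride  = New-AzFrontDoorWafManagedRuleOverrideObject -RuleId "942100" -EnabledState Disabled
$groupOverride = New-AzFrontDoorWafRuleGroupOverrideObject -RuleGroupName "SQLI" -ManagedRuleOverride $ruleOverride
$managedRules  = New-AzFrontDoorWafManagedRuleObject -Type "Microsoft_DefaultRuleSet" -Version "2.1" -RuleGroupOverride $groupOverride

Update-AzFrontDoorWafPolicy -Name "<policy-name>" -ResourceGroupName "<resource-group-name>" -ManagedRule $managedRules
```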
#### Disabled rules
DRS 2.0 includes 17 rule groups, as shown in the following table. Each group con
> [!NOTE] > DRS 2.0 is only available on Azure Front Door Premium.
-|Rule group|Description|
-|||
-|[General](#general-20)|General group|
-|[METHOD-ENFORCEMENT](#drs911-20)|Lock-down methods (PUT, PATCH)|
-|[PROTOCOL-ENFORCEMENT](#drs920-20)|Protect against protocol and encoding issues|
-|[PROTOCOL-ATTACK](#drs921-20)|Protect against header injection, request smuggling, and response splitting|
-|[APPLICATION-ATTACK-LFI](#drs930-20)|Protect against file and path attacks|
-|[APPLICATION-ATTACK-RFI](#drs931-20)|Protect against remote file inclusion (RFI) attacks|
-|[APPLICATION-ATTACK-RCE](#drs932-20)|Protect again remote code execution attacks|
-|[APPLICATION-ATTACK-PHP](#drs933-20)|Protect against PHP-injection attacks|
-|[APPLICATION-ATTACK-NodeJS](#drs934-20)|Protect against Node JS attacks|
-|[APPLICATION-ATTACK-XSS](#drs941-20)|Protect against cross-site scripting attacks|
-|[APPLICATION-ATTACK-SQLI](#drs942-20)|Protect against SQL-injection attacks|
-|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-20)|Protect against session-fixation attacks|
-|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-20)|Protect against JAVA attacks|
-|[MS-ThreatIntel-WebShells](#drs9905-20)|Protect against Web shell attacks|
-|[MS-ThreatIntel-AppSec](#drs9903-20)|Protect against AppSec attacks|
-|[MS-ThreatIntel-SQLI](#drs99031-20)|Protect against SQLI attacks|
-|[MS-ThreatIntel-CVEs](#drs99001-20)|Protect against CVE attacks|
+|Rule group|Managed rule group ID|Description|
+||||
+|[General](#general-20)|General|General group|
+|[METHOD-ENFORCEMENT](#drs911-20)|METHOD-ENFORCEMENT|Lock-down methods (PUT, PATCH)|
+|[PROTOCOL-ENFORCEMENT](#drs920-20)|PROTOCOL-ENFORCEMENT|Protect against protocol and encoding issues|
+|[PROTOCOL-ATTACK](#drs921-20)|PROTOCOL-ATTACK|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-20)|LFI|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-20)|RFI|Protect against remote file inclusion (RFI) attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-20)|RCE|Protect against remote code execution attacks|
+|[APPLICATION-ATTACK-PHP](#drs933-20)|PHP|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-NodeJS](#drs934-20)|NODEJS|Protect against Node JS attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-20)|XSS|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-20)|SQLI|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-20)|FIX|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-20)|JAVA|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-20)|MS-ThreatIntel-WebShells|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-20)|MS-ThreatIntel-AppSec|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-20)|MS-ThreatIntel-SQLI|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-20)|MS-ThreatIntel-CVEs|Protect against CVE attacks|
### DRS 1.1
-|Rule group|Description|
-|||
-|[PROTOCOL-ATTACK](#drs921-11)|Protect against header injection, request smuggling, and response splitting|
-|[APPLICATION-ATTACK-LFI](#drs930-11)|Protect against file and path attacks|
-|[APPLICATION-ATTACK-RFI](#drs931-11)|Protection against remote file inclusion attacks|
-|[APPLICATION-ATTACK-RCE](#drs932-11)|Protection against remote command execution|
-|[APPLICATION-ATTACK-PHP](#drs933-11)|Protect against PHP-injection attacks|
-|[APPLICATION-ATTACK-XSS](#drs941-11)|Protect against cross-site scripting attacks|
-|[APPLICATION-ATTACK-SQLI](#drs942-11)|Protect against SQL-injection attacks|
-|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-11)|Protect against session-fixation attacks|
-|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-11)|Protect against JAVA attacks|
-|[MS-ThreatIntel-WebShells](#drs9905-11)|Protect against Web shell attacks|
-|[MS-ThreatIntel-AppSec](#drs9903-11)|Protect against AppSec attacks|
-|[MS-ThreatIntel-SQLI](#drs99031-11)|Protect against SQLI attacks|
-|[MS-ThreatIntel-CVEs](#drs99001-11)|Protect against CVE attacks|
+|Rule group|Managed rule group ID|Description|
+||||
+|[PROTOCOL-ATTACK](#drs921-11)|PROTOCOL-ATTACK|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-11)|LFI|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-11)|RFI|Protection against remote file inclusion attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-11)|RCE|Protection against remote command execution|
+|[APPLICATION-ATTACK-PHP](#drs933-11)|PHP|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-11)|XSS|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-11)|SQLI|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-11)|FIX|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-11)|JAVA|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-11)|MS-ThreatIntel-WebShells|Protect against Web shell attacks|
+|[MS-ThreatIntel-AppSec](#drs9903-11)|MS-ThreatIntel-AppSec|Protect against AppSec attacks|
+|[MS-ThreatIntel-SQLI](#drs99031-11)|MS-ThreatIntel-SQLI|Protect against SQLI attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-11)|MS-ThreatIntel-CVEs|Protect against CVE attacks|
### DRS 1.0
-|Rule group|Description|
-|||
-|[PROTOCOL-ATTACK](#drs921-10)|Protect against header injection, request smuggling, and response splitting|
-|[APPLICATION-ATTACK-LFI](#drs930-10)|Protect against file and path attacks|
-|[APPLICATION-ATTACK-RFI](#drs931-10)|Protection against remote file inclusion attacks|
-|[APPLICATION-ATTACK-RCE](#drs932-10)|Protection against remote command execution|
-|[APPLICATION-ATTACK-PHP](#drs933-10)|Protect against PHP-injection attacks|
-|[APPLICATION-ATTACK-XSS](#drs941-10)|Protect against cross-site scripting attacks|
-|[APPLICATION-ATTACK-SQLI](#drs942-10)|Protect against SQL-injection attacks|
-|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)|Protect against session-fixation attacks|
-|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)|Protect against JAVA attacks|
-|[MS-ThreatIntel-WebShells](#drs9905-10)|Protect against Web shell attacks|
-|[MS-ThreatIntel-CVEs](#drs99001-10)|Protect against CVE attacks|
+|Rule group|Managed rule group ID|Description|
+||||
+|[PROTOCOL-ATTACK](#drs921-10)|PROTOCOL-ATTACK|Protect against header injection, request smuggling, and response splitting|
+|[APPLICATION-ATTACK-LFI](#drs930-10)|LFI|Protect against file and path attacks|
+|[APPLICATION-ATTACK-RFI](#drs931-10)|RFI|Protection against remote file inclusion attacks|
+|[APPLICATION-ATTACK-RCE](#drs932-10)|RCE|Protection against remote command execution|
+|[APPLICATION-ATTACK-PHP](#drs933-10)|PHP|Protect against PHP-injection attacks|
+|[APPLICATION-ATTACK-XSS](#drs941-10)|XSS|Protect against cross-site scripting attacks|
+|[APPLICATION-ATTACK-SQLI](#drs942-10)|SQLI|Protect against SQL-injection attacks|
+|[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)|FIX|Protect against session-fixation attacks|
+|[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)|JAVA|Protect against JAVA attacks|
+|[MS-ThreatIntel-WebShells](#drs9905-10)|MS-ThreatIntel-WebShells|Protect against Web shell attacks|
+|[MS-ThreatIntel-CVEs](#drs99001-10)|MS-ThreatIntel-CVEs|Protect against CVE attacks|
### Bot rules