Updates from: 01/09/2024 02:11:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md
Previously updated : 06/22/2023 Last updated : 01/08/2024
All types of technical profiles share the same concept. They start by reading th
- Calls a REST API while sending parameters as InputClaims and getting information back as OutputClaims. - Creates or updates the user account. - Sends and verifies the multifactor authentication text message.
-1. **Validation technical profiles**: A [self-asserted technical profile](self-asserted-technical-profile.md) can call [validation technical profiles](validation-technical-profile.md) to validate the data profiled by the user.
+1. **Validation technical profiles**: A [self-asserted technical profile](self-asserted-technical-profile.md) can call [validation technical profiles](validation-technical-profile.md) to validate the data profiled by the user. Only self-asserted technical profiles can use validation technical profiles.
1. **Output claims**: Claims are returned to the claims bag. You can use those claims in the next orchestration steps or in output claims transformations. 1. **Output claims transformations**: After the technical profile completes, Azure AD B2C runs the output [claims transformations](claimstransformations.md). 1. **SSO session management**: Persists the technical profile's data to the session by using [SSO session management](custom-policy-reference-sso.md).
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Previously updated : 11/20/2023 Last updated : 01/08/2024
Azure AD B2C directory user profile supports the [user resource type](/graph/api
|creationType |String|If the user account was created as a local account for an Azure Active Directory B2C tenant, the value is LocalAccount or nameCoexistence. Read only.|No|No|Persisted, Output| |dateOfBirth |Date|Date of birth.|No|No|Persisted, Output| |department |String|The name for the department in which the user works. Max length 64.|Yes|No|Persisted, Output|
-|displayName |String|The display name for the user. Max length 256. \< \> characters aren't allowed. | Yes|Yes|Persisted, Output|
+|displayName |String|The display name for the user. Max length 256. | Yes|Yes|Persisted, Output|
|facsimileTelephoneNumber<sup>1</sup>|String|The telephone number of the user's business fax machine.|Yes|No|Persisted, Output| |givenName |String|The given name (first name) of the user. Max length 64.|Yes|Yes|Persisted, Output| |jobTitle |String|The user's job title. Max length 128.|Yes|Yes|Persisted, Output|
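These attributes are managed through the Microsoft Graph user resource. As a minimal, hedged illustration (not part of the original article), the following Python sketch patches a user's `displayName` and `givenName` with the `requests` library; the access token and user object ID are placeholder assumptions.

```python
import requests

# Assumptions: you already acquired a Graph access token with User.ReadWrite.All
# permission and know the user's object ID in the B2C tenant.
GRAPH_TOKEN = "<access-token>"   # hypothetical placeholder
USER_ID = "<user-object-id>"     # hypothetical placeholder

# displayName is limited to 256 characters; givenName to 64 (see the table above).
payload = {
    "displayName": "Casey Jensen",
    "givenName": "Casey",
}

response = requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}",
    headers={
        "Authorization": f"Bearer {GRAPH_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()  # Microsoft Graph returns 204 No Content on success
```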
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## January 2024
+
+### Content Safety SDK GA
+
+The Azure AI Content Safety service is now generally available through the following client library SDKs:
+
+- **C#**: [Package](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [API reference](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0)
+- **Python**: [Package](https://pypi.org/project/azure-ai-contentsafety/) | [API reference](/python/api/overview/azure/ai-contentsafety-readme?view=azure-python) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0)
+- **Java**: [Package](https://oss.sonatype.org/#nexus-search;quick~contentsafety) | [API reference](/java/api/overview/azure/ai-contentsafety-readme?view=azure-java-stable) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/java/1.0.0)
+- **JavaScript**: [Package](https://www.npmjs.com/package/@azure-rest/ai-content-safety?activeTab=readme) | [API reference](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0)
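To complement the GA announcement, here's a minimal sketch using the Python `azure-ai-contentsafety` 1.0.0 package to analyze a text string; the endpoint and key values are placeholders, and the category names reflect the GA defaults.

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholders: substitute your Content Safety resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a short text sample across the default harm categories.
result = client.analyze_text(AnalyzeTextOptions(text="Sample text to analyze."))

# Each entry reports a category (Hate, SelfHarm, Sexual, Violence) and a severity level.
for item in result.categories_analysis:
    print(item.category, item.severity)
```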
+ ## November 2023 ### Jailbreak risk and Protected material detection (preview)
ai-services Concept Retrieval Augumented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-retrieval-augumented-generation.md
monikerRange: '>=doc-intel-3.1.0'
Retrieval-Augmented Generation (RAG) is a design pattern that combines a pretrained Large Language Model (LLM) like ChatGPT with an external data retrieval system to generate an enhanced response incorporating new data outside of the original training data. Adding an information retrieval system to your applications enables you to chat with your documents, generate captivating content, and access the power of Azure OpenAI models for your data. You also have more control over the data used by the LLM as it formulates a response.
-The Document Intelligence [Layout model](concept-layout.md) is an advanced machine-learning based document analysis API. The Layout model offers a comprehensive solution for advanced content extraction and document structure analysis capabilities. With the Layout model, you can easily extract text and structural to divide large bodies of text into smaller, meaningful chunks based on semantic content rather than arbitrary splits. The extracted information can be conveniently outputted to Markdown format, enabling you to define your semantic chunking strategy based on the provided building blocks.
+The Document Intelligence [Layout model](concept-layout.md) is an advanced machine-learning based document analysis API. The Layout model offers a comprehensive solution for advanced content extraction and document structure analysis capabilities. With the Layout model, you can easily extract text and structural elements to divide large bodies of text into smaller, meaningful chunks based on semantic content rather than arbitrary splits. The extracted information can be conveniently outputted to Markdown format, enabling you to define your semantic chunking strategy based on provided building blocks.
:::image type="content" source="media/rag/azure-rag-processing.png" alt-text="Screenshot depicting semantic chunking with RAG using Azure AI Document Intelligence.":::
Long sentences are challenging for natural language processing (NLP) application
Text data chunking strategies play a key role in optimizing the RAG response and performance. Fixed-sized and semantic are two distinct chunking methods:
-* **Fixed-sized chunking**. Most chunking strategies used in RAG today are based on fix-sized text segments known as chunks. Fixed-sized chunking is quick, easy, and effective with text that doesn't have a strong semantic structure such as logs and data. However it isn't recommended for text that requires semantic understanding and precise context. The fixed-size nature of the window can result in severing words, sentences, or paragraphs impeding comprehension and disrupt the flow of information and understanding.
+* **Fixed-sized chunking**. Most chunking strategies used in RAG today are based on fixed-sized text segments known as chunks. Fixed-sized chunking is quick, easy, and effective with text that doesn't have a strong semantic structure, such as logs and data. However, it isn't recommended for text that requires semantic understanding and precise context. The fixed-size nature of the window can sever words, sentences, or paragraphs, impeding comprehension and disrupting the flow of information (a minimal sketch of this approach follows the list below).
-* **Semantic chunking**. This method divides the text into chunks based on semantic understanding. Division boundaries are focused on sentence subject and use significant computational algorithmically complex resources. However, it has the distinct advantage of maintaining semantic consistency within each chunk. It's useful for text summarization, sentiment analysis, and document classification tasks.
+* **Semantic chunking**. This method divides the text into chunks based on semantic understanding. Division boundaries are based on sentence subject and require significant, algorithmically complex computation. However, this approach has the distinct advantage of maintaining semantic consistency within each chunk. It's useful for text summarization, sentiment analysis, and document classification tasks.
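To make the contrast concrete, here's a minimal, library-free Python sketch of fixed-size chunking with overlap; the `chunk_size` and `overlap` values are illustrative choices, not recommendations from the article.

```python
def fixed_size_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap.

    Boundaries ignore sentence and paragraph structure, which is exactly the
    limitation described above for fixed-size chunking.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks


sample = "Azure AI Document Intelligence extracts text and structure. " * 20
print(len(fixed_size_chunks(sample)), "chunks")
```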
## Semantic chunking with Document Intelligence Layout model
Markdown is a structured and formatted markup language and a popular input for e
* **Simplified processing**. You can parse different document types, such as digital and scanned PDFs, images, office files (docx, xlsx, pptx), and HTML, with just a single API call.
-* **Scalability and AI quality**. The Layout model is highly scalable in Optical Character Recognition (OCR), table extraction, and [document structure analysis](concept-layout.md#document-layout-analysis). It supports [309 printed and 12 handwritten languages](language-support-ocr.md#model-id-prebuilt-layout) further ensuring high-quality results driven by AI capabilities.
+* **Scalability and AI quality**. The Layout model is highly scalable in Optical Character Recognition (OCR), table extraction, and [document structure analysis](concept-layout.md#document-layout-analysis). It supports [309 printed and 12 handwritten languages](language-support-ocr.md#model-id-prebuilt-layout), further ensuring high-quality results driven by AI capabilities.
* **Large language model (LLM) compatibility**. The Layout model's Markdown-formatted output is LLM friendly and facilitates seamless integration into your workflows. You can turn any table in a document into Markdown format and avoid the extensive effort of parsing documents for greater LLM understanding.
-**Text image processed with Document Intelligence Studio and output to markdown using Layout model**
+**Text image processed with Document Intelligence Studio and output to Markdown using the Layout model**
:::image type="content" source="media/rag/markdown-text-output.png" alt-text="Screenshot of newspaper article processed by Layout model and outputted to Markdown.":::
You can follow the [Document Intelligence Studio quickstart](quickstarts/try-doc
* [Azure OpenAI on your data](../openai/concepts/use-your-data.md) enables you to run supported chat on your documents. Azure OpenAI on your data applies the Document Intelligence Layout model to extract and parse document data by chunking long text based on tables and paragraphs. You can also customize your chunking strategy using [Azure OpenAI sample scripts](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) located in our GitHub repo.
-* Azure AI Document Intelligence is now integrated with [LangChain](https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence) as one of its document loaders. You can use it to easily load the data and output to Markdown format. This [notebook](https://github.com/microsoft/Form-Recognizer-Toolkit/blob/main/SampleCode/Python/sample_rag_langchain.ipynb) shows a simple demo for RAG pattern with Azure AI Document Intelligence as document loader and Azure Search as retriever in LangChain.
+* Azure AI Document Intelligence is now integrated with [LangChain](https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence) as one of its document loaders. You can use it to easily load the data and output it to Markdown format. For more information, see our [sample code](https://github.com/microsoft/Form-Recognizer-Toolkit/blob/main/SampleCode/Python/sample_rag_langchain.ipynb), which shows a simple demo of the RAG pattern with Azure AI Document Intelligence as the document loader and Azure AI Search as the retriever in LangChain (a minimal loader sketch also follows this list).
* The chat with your data solution accelerator [code sample](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) demonstrates an end-to-end baseline RAG pattern sample. It uses Azure AI Search as a retriever and Azure AI Document Intelligence for document loading and semantic chunking.
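As referenced in the LangChain bullet above, here's a minimal sketch of loading a document with the Azure AI Document Intelligence loader and splitting the Markdown output by headers. It assumes the `langchain`, `langchain-community`, and Document Intelligence client packages are installed, and the endpoint, key, and file path are placeholders.

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

# Placeholders: substitute your Document Intelligence endpoint, key, and file.
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<your-resource>.cognitiveservices.azure.com",
    api_key="<your-key>",
    file_path="sample-report.pdf",
    api_model="prebuilt-layout",
)
docs = loader.load()  # the document content is returned as Markdown

# Chunk semantically by the Markdown headings produced by the Layout model.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
chunks = splitter.split_text(docs[0].page_content)
print(f"Produced {len(chunks)} header-based chunks")
```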
ai-services Gpt V Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/gpt-v-quickstart.md
description: Use this article to get started using Azure OpenAI to deploy and us
zone_pivot_groups: openai-quickstart-gpt-v
* Learn more about these APIs in the [GPT-4 Turbo with Vision how-to guide](./gpt-v-quickstart.md) * [GPT-4 Turbo with Vision frequently asked questions](./faq.yml#gpt-4-turbo-with-vision)
-* [GPT-4 Turbo with Vision API reference](https://aka.ms/gpt-v-api-ref)
+* [GPT-4 Turbo with Vision API reference](https://aka.ms/gpt-v-api-ref)
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
zone_pivot_groups: "openai-embeddings" recommendations: false--+ # Tutorial: Explore Azure OpenAI Service embeddings and document search
ai-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/use-rest-api-programmatically.md
Previously updated : 07/18/2023 Last updated : 01/08/2024 recommendations: false ms.devlang: csharp, golang, java, javascript, python
To get started, you need:
1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
- 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
> [!NOTE] > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
To get started, you need:
1. Review the service terms and select **Create** to deploy your resource.
- 1. After your resource has successfully deployed, select **Go to resource**.
+ 1. After your resource successfully deploys, select **Go to resource**.
### Retrieve your key and custom domain endpoint *Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
To get started, you need:
Requests to the Translator service require a read-only key for authenticating access.
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
1. In the left rail, under *Resource Management*, select **Keys and Endpoint**. 1. Copy and paste your key in a convenient location, such as *Microsoft Notepad*. 1. You paste it into the code sample to authenticate your request to the Document Translation service.
You need to [**create containers**](../../../../storage/blobs/storage-quickstar
The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](create-sas-tokens.md).
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** blob must have designated **read** and **list** access.
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+* Your **glossary** blob must designate **read** and **list** access.
> [!TIP] >
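As a hedged illustration of the permissions listed above (not part of the original article), the following Python sketch uses the `azure-storage-blob` package to generate container-level SAS tokens with read/list and write/list permissions; the account name, key, and container names are placeholders.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholders: substitute your storage account name, key, and container names.
ACCOUNT = "<storage-account>"
ACCOUNT_KEY = "<storage-account-key>"

def container_sas(container: str, permission: ContainerSasPermissions) -> str:
    """Return a container-level SAS token valid for 24 hours."""
    return generate_container_sas(
        account_name=ACCOUNT,
        container_name=container,
        account_key=ACCOUNT_KEY,
        permission=permission,
        expiry=datetime.now(timezone.utc) + timedelta(hours=24),
    )

# Source needs read + list access; target needs write + list access.
source_sas = container_sas("source-en", ContainerSasPermissions(read=True, list=True))
target_sas = container_sas("target-fr", ContainerSasPermissions(write=True, list=True))

source_url = f"https://{ACCOUNT}.blob.core.windows.net/source-en?{source_sas}"
target_url = f"https://{ACCOUNT}.blob.core.windows.net/target-fr?{target_sas}"
```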
The following headers are included with each Document Translation API request:
"inputs": [ { "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ "sourceUrl": "{sourceSASUrl}"
}, "targets": [ {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "targetUrl": "{targetSASUrl}",
"language": "fr" } ]
The following headers are included with each Document Translation API request:
### Translate a specific document in a container * Specify `"storageType": "File"`
-* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you've created source URL & SAS token for the specific blob/document (not for the container)
-* Ensure you've specified the target filename as part of the target URL – though the SAS token is still for the container.
+* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you created the source URL and SAS token for the specific blob/document (not for the container).
+* Ensure you specified the target filename as part of the target URL – though the SAS token is still for the container.
* This sample request returns a single document translated into two target languages ```json
The following headers are included with each Document Translation API request:
{ "storageType": "File", "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ "sourceUrl": "{sourceSASUrl}"
}, "targets": [ {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "targetUrl": "{targetSASUrl}",
"language": "es" }, {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "targetUrl": "{targetSASUrl}",
"language": "de" } ]
The following headers are included with each Document Translation API request:
"inputs": [ { "source": {
- "sourceUrl": "https://myblob.blob.core.windows.net/source"
+ "sourceUrl": "{sourceSASUrl}"
}, "targets": [ {
- "targetUrl": "https://myblob.blob.core.windows.net/target",
+ "targetUrl": "{targetSASUrl}",
"language": "es", "glossaries": [ {
- "glossaryUrl": "https:// myblob.blob.core.windows.net/glossary/en-es.xlf",
+ "glossaryUrl": "{glossaryUrl/en-es.xlf}",
"format": "xliff" } ]
func main() {
### Brief overview
-Cancel currently processing or queued job. Only documents for which translation hasn't started are canceled.
+Cancel a currently processing or queued job. Only documents for which translation hasn't started are canceled.
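Ahead of the language-specific tabs that follow, here's a minimal Python sketch of the cancel call. It assumes a job is canceled by sending DELETE to the batch job resource on the v1.1 route; the endpoint, key, and job ID are placeholders.

```python
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
KEY = "<your-key>"
JOB_ID = "<translation-job-id>"  # returned in Operation-Location when the job was created

# Assumption: canceling a job is a DELETE on the batch job resource (v1.1 route).
response = requests.delete(
    f"{ENDPOINT}/translator/text/batch/v1.1/batches/{JOB_ID}",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    timeout=30,
)
response.raise_for_status()
print("Cancel request accepted:", response.status_code)
```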
### [C#](#tab/csharp)
func main() {
| 200 | OK | The request was successful. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. | | 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource _not_ the **Azure AI services** multi-service resource.
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+| 429 | Too Many Requests | You exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. Can also indicate invalid headers. |
## Learn more
ai-studio Ai Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md
Title: Azure AI resource concepts description: This article introduces concepts about Azure AI resources.-
Last updated 12/14/2023 + # Azure AI resources
ai-studio Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md
Title: Connections in Azure AI Studio description: This article introduces connections in Azure AI Studio-
Last updated 11/15/2023 + # Connections in Azure AI Studio
ai-studio Content Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/content-filtering.md
Title: Azure AI Studio content filtering description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI Studio.-
Last updated 11/15/2023 + # Content filtering in Azure AI Studio
ai-studio Deployments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md
Title: Deploy models, flows, and web apps with Azure AI Studio description: Learn about deploying models, flows, and web apps with Azure AI Studio.-
Last updated 12/7/2023 + # Overview: Deploy models, flows, and web apps with Azure AI Studio
ai-studio Evaluation Approach Gen Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-approach-gen-ai.md
Title: Evaluation of generative AI applications with Azure AI Studio description: Explore the broader domain of monitoring and evaluating large language models through the establishment of precise metrics, the development of test sets for measurement, and the implementation of iterative testing.-
Last updated 11/15/2023 + # Evaluation of generative AI applications
ai-studio Evaluation Improvement Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md
Title: Harms mitigation strategies with Azure AI description: Explore various strategies for addressing the challenges posed by large language models and mitigating potential harms.-
Last updated 11/15/2023 + # Harms mitigation strategies with Azure AI
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
Title: Evaluation and monitoring metrics for generative AI description: Discover the supported built-in metrics for evaluating large language models, understand their application and usage, and learn how to interpret them effectively.-
Last updated 11/15/2023 + # Evaluation and monitoring metrics for generative AI
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
Title: Role-based access control in Azure AI Studio description: This article introduces role-based access control in Azure AI Studio-
Last updated 11/15/2023 + # Role-based access control in Azure AI Studio
ai-studio Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/retrieval-augmented-generation.md
Title: Retrieval augmented generation in Azure AI Studio description: This article introduces retrieval augmented generation for use in generative AI applications.-
Last updated 11/15/2023 + # Retrieval augmented generation and indexes
ai-studio Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/autoscale.md
Title: Autoscale Azure AI limits description: Learn how you can manage and increase quotas for resources with Azure AI Studio.-
Last updated 11/15/2023 + # Autoscale Azure AI limits
ai-studio Cli Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md
Title: Get started with the Azure AI CLI description: This article provides instructions on how to install and get started with the Azure AI CLI.-
Last updated 11/15/2023 + # Get started with the Azure AI CLI
ai-studio Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md
Title: Commitment tier pricing for Azure AI description: Learn how to sign up for commitment tier pricing instead of pay-as-you-go pricing.-
Last updated 11/15/2023 + # Commitment tier pricing for Azure AI
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
Title: How to configure a managed network for Azure AI description: Learn how to configure a managed network for Azure AI-
Last updated 11/15/2023 + # How to configure a managed network for Azure AI
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
Title: How to configure a private link for Azure AI description: Learn how to configure a private link for Azure AI-
Last updated 11/15/2023 + # How to configure a private link for Azure AI
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
Title: How to add a new connection in Azure AI Studio description: Learn how to add a new connection in Azure AI Studio-
Last updated 11/15/2023 + # How to add a new connection in Azure AI Studio
ai-studio Costs Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md
Title: Plan and manage costs for Azure AI Studio description: Learn how to plan for and manage costs for Azure AI Studio by using cost analysis in the Azure portal.-
Last updated 11/15/2023 + # Plan and manage costs for Azure AI Studio
ai-studio Create Azure Ai Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md
Title: How to create and manage an Azure AI resource description: This article describes how to create and manage an Azure AI resource-
Last updated 11/15/2023 + # How to create and manage an Azure AI resource
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
Title: How to create and manage compute instances in Azure AI Studio description: This article provides instructions on how to create and manage compute instances in Azure AI Studio.-
Last updated 11/15/2023 + # How to create and manage compute instances in Azure AI Studio
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Title: How to create and manage prompt flow runtimes description: Learn how to create and manage prompt flow runtimes in Azure AI Studio.-
Last updated 11/15/2023 + # How to create and manage prompt flow runtimes in Azure AI Studio
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
Title: Create an Azure AI project in Azure AI Studio description: This article describes how to create an Azure AI Studio project.-
Last updated 11/15/2023 + # Create an Azure AI project in Azure AI Studio
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
Title: How to add and manage data in your Azure AI project description: Learn how to add and manage data in your Azure AI project-
Last updated 11/15/2023 + # How to add and manage data in your Azure AI project
ai-studio Data Image Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md
Title: 'Use your image data with Azure OpenAI Service' description: Use this article to learn about using your image data for image generation in Azure AI Studio.- Last updated 12/11/2023 + # Azure OpenAI on your data with images using GPT-4 Turbo with Vision (preview)
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
Title: How to deploy Llama 2 family of large language models with Azure AI Studio description: Learn how to deploy Llama 2 family of large language models with Azure AI Studio.- Last updated 12/11/2023 +
ai-studio Deploy Models Open https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-open.md
Title: How to deploy open models with Azure AI Studio description: Learn how to deploy open models with Azure AI Studio.- Last updated 12/11/2023 + # How to deploy large language models with Azure AI Studio
ai-studio Deploy Models Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-openai.md
Title: How to deploy Azure OpenAI models with Azure AI Studio description: Learn how to deploy Azure OpenAI models with Azure AI Studio.-
Last updated 12/11/2023 + # How to deploy Azure OpenAI models with Azure AI Studio
ai-studio Evaluate Flow Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-flow-results.md
Title: How to view evaluation results in Azure AI Studio description: This article provides instructions on how to view evaluation results in Azure AI Studio.-
Last updated 11/15/2023 + # How to view evaluation results in Azure AI Studio
ai-studio Evaluate Generative Ai App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-generative-ai-app.md
Title: How to evaluate with Azure AI Studio and SDK description: Evaluate your generative AI application with Azure AI Studio UI and SDK.-
Last updated 11/15/2023 + zone_pivot_groups: azure-ai-studio-sdk
ai-studio Evaluate Prompts Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-prompts-playground.md
Title: How to manually evaluate prompts in Azure AI Studio playground description: Quickly test and evaluate prompts in Azure AI Studio playground.-
Last updated 11/15/2023 + # Manually evaluate prompts in Azure AI Studio playground
ai-studio Fine Tune Model Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md
Title: Fine-tune a Llama 2 model in Azure AI Studio description: Learn how to fine-tune a Llama 2 model in Azure AI Studio.- Last updated 12/11/2023 +
ai-studio Flow Bulk Test Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-bulk-test-evaluation.md
Title: Submit batch run and evaluate a flow description: Learn how to submit batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure AI Studio.-
Last updated 11/15/2023 + # Submit a batch run and evaluate a flow
ai-studio Flow Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md
Title: Deploy a flow as a managed online endpoint for real-time inference description: Learn how to deploy a flow as a managed online endpoint for real-time inference with Azure AI Studio.-
Last updated 11/15/2023 + # Deploy a flow for real-time inference
ai-studio Flow Develop Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop-evaluation.md
Title: Develop an evaluation flow description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a batch run as an evaluation method in prompt flow with Azure AI Studio.-
Last updated 11/15/2023 + # Develop an evaluation flow in Azure AI Studio
ai-studio Flow Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop.md
Title: How to build with prompt flow description: This article provides instructions on how to build with prompt flow.-
Last updated 11/15/2023 + # Develop a prompt flow
ai-studio Flow Tune Prompts Using Variants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-tune-prompts-using-variants.md
Title: Tune prompts using variants description: Learn how to tune prompts using variants in Prompt flow with Azure AI Studio.-
Last updated 11/15/2023 + # Tune prompts using variants in Azure AI Studio
ai-studio Generate Data Qa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/generate-data-qa.md
Title: How to generate question and answer pairs from your source dataset description: This article provides instructions on how to generate question and answer pairs from your source dataset.-
Last updated 11/15/2023 + # How to generate question and answer pairs from your source dataset
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
Title: How to create vector indexes description: Learn how to create and use a vector index for performing Retrieval Augmented Generation (RAG).-
Last updated 11/15/2023 + # How to create a vector index
ai-studio Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog.md
Title: Explore the model catalog in Azure AI Studio description: This article introduces foundation model capabilities and the model catalog in Azure AI Studio.-
Last updated 11/15/2023 + # Explore the model catalog in Azure AI Studio
ai-studio Models Foundation Azure Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/models-foundation-azure-ai.md
Title: Explore Azure AI capabilities in Azure AI Studio description: This article introduces Azure AI capabilities in Azure AI Studio.-
Last updated 11/15/2023 + # Explore Azure AI capabilities in Azure AI Studio
ai-studio Monitor Quality Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/monitor-quality-safety.md
Title: Monitor quality and safety of deployed applications description: Learn how to monitor quality and safety of deployed applications with Azure AI Studio.-
Last updated 11/15/2023 + # Monitor quality and safety of deployed applications
ai-studio Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
Title: Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio description: This article introduces the Azure OpenAI GPT-4 Turbo with Vision tool for flows in Azure AI Studio.+ ---- Previously updated : 01/02/2024+ Last updated : 1/8/2024++++
-# Azure OpenAI GPT-4 Turbo with Vision tool (preview) in Azure AI studio
+# Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio
-Azure OpenAI GPT-4 Turbo with Vision tool enables you to leverage your AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them.
+
+The prompt flow *Azure OpenAI GPT-4 Turbo with Vision* tool enables you to use your Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them.
## Prerequisites -- Create a GPT-4 Turbo with Vision deployment
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
+- Access granted to Azure OpenAI in the desired Azure subscription.
+
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you run into a problem, open an issue on this repo to contact us.
- In AI studio, select **Deployments** from the left navigation pane and create a deployment by selecting model name: `gpt-4v`.
+- An [Azure AI resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
## Connection
-Setup connections to provisioned resources in prompt flow.
+Set up connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version | |-|-|-|-|-|
Setup connections to provisioned resources in prompt flow.
| Name | Type | Description | Required | ||-||-|
-| connection | AzureOpenAI | the AzureOpenAI connection to be used in the tool | Yes |
-| deployment\_name | string | the language model to use | Yes |
-| prompt | string | The text prompt that the language model will use to generate its response. | Yes |
-| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
-| temperature | float | the randomness of the generated text. Default is 1. | No |
-| stop | list | the stopping sequence for the generated text. Default is null. | No |
-| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
-| presence\_penalty | float | value that controls the model's behavior with regard to repeating phrases. Default is 0. | No |
-| frequency\_penalty | float | value that controls the model's behavior with regard to generating rare phrases. Default is 0. | No |
+| connection | AzureOpenAI | The Azure OpenAI connection to be used in the tool. | Yes |
+| deployment\_name | string | The language model to use. | Yes |
+| prompt | string | Text prompt that the language model uses to generate its response. | Yes |
+| max\_tokens | integer | Maximum number of tokens to generate in the response. Default is 512. | No |
+| temperature | float | Randomness of the generated text. Default is 1. | No |
+| stop | list | Stopping sequence for the generated text. Default is null. | No |
+| top_p | float | Probability of using the top choice from the generated tokens. Default is 1. | No |
+| presence\_penalty | float | Value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
+| frequency\_penalty | float | Value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
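For orientation, the tool's inputs map onto an Azure OpenAI chat completion request. Here's a hedged Python sketch using the `openai` package directly (not the prompt flow tool itself); the endpoint, key, deployment name, API version, and image URL are placeholder assumptions.

```python
from openai import AzureOpenAI

# Placeholders: substitute your Azure OpenAI endpoint, key, and GPT-4 Turbo with
# Vision deployment name; the API version is an assumption for illustration.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="<your-gpt-4-vision-deployment>",  # maps to deployment_name
    max_tokens=512,                          # maps to max_tokens
    temperature=1,                           # maps to temperature
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```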
## Outputs
ai-studio Content Safety Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md
Title: Content Safety tool for flows in Azure AI Studio description: This article introduces the Content Safety tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Content safety tool for flows in Azure AI Studio
ai-studio Embedding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/embedding-tool.md
Title: Embedding tool for flows in Azure AI Studio description: This article introduces the Embedding tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Embedding tool for flows in Azure AI Studio
ai-studio Faiss Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/faiss-index-lookup-tool.md
Title: Faiss Index Lookup tool for flows in Azure AI Studio description: This article introduces the Faiss Index Lookup tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Faiss Index Lookup tool for flows in Azure AI Studio
ai-studio Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
Title: LLM tool for flows in Azure AI Studio description: This article introduces the LLM tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # LLM tool for flows in Azure AI Studio
ai-studio Prompt Flow Tools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
Title: Overview of prompt flow tools in Azure AI Studio description: Learn about prompt flow tools that are available in Azure AI Studio.- Last updated 12/6/2023--+++ # Overview of prompt flow tools in Azure AI Studio
The following table provides an index of tools in prompt flow. If existing tools
| [LLM](./llm-tool.md) | Use Azure Open AI large language models (LLM) for tasks such as text completion or chat. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Python](./python-tool.md) | Run Python code. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Azure OpenAI GPT-4 Turbo with Vision (preview)](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Vector Index Lookup](./vector-index-lookup-tool.md) | Search text or a vector-based query from a vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) | | [Content Safety (Text)](./content-safety-tool.md) | Use Azure AI Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Search a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
ai-studio Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
Title: Prompt tool for flows in Azure AI Studio description: This article introduces the Prompt tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Prompt tool for flows in Azure AI Studio
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
Title: Python tool for flows in Azure AI Studio description: This article introduces the Python tool for flows in Azure AI Studio.- -+ Last updated 11/15/2023--+++ # Python tool for flows in Azure AI Studio
ai-studio Serp Api Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
Title: Serp API tool for flows in Azure AI Studio description: This article introduces the Serp API tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Serp API tool for flows in Azure AI Studio
ai-studio Vector Db Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md
Title: Vector DB Lookup tool for flows in Azure AI Studio description: This article introduces the Vector DB Lookup tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Vector DB Lookup tool for flows in Azure AI Studio
ai-studio Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-index-lookup-tool.md
Title: Vector index lookup tool for flows in Azure AI Studio description: This article introduces the Vector index lookup tool for flows in Azure AI Studio.- - ignite-2023-+ Last updated 11/15/2023--+++ # Vector index lookup tool for flows in Azure AI Studio
ai-studio Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow.md
Title: Prompt flow in Azure AI Studio description: This article introduces prompt flow in Azure AI Studio.-
Last updated 11/15/2023 + # Prompt flow in Azure AI Studio
ai-studio Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md
Title: Manage and increase quotas for resources with Azure AI Studio description: This article provides instructions on how to manage and increase quotas for resources with Azure AI Studio.-
Last updated 11/15/2023 + # Manage and increase quotas for resources with Azure AI Studio
ai-studio Sdk Generative Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-generative-overview.md
Title: Overview of the Azure AI Generative SDK packages description: This article provides overview of the Azure AI Generative SDK packages.- Last updated 12/15/2023 + # Overview of the Azure AI Generative SDK packages
ai-studio Sdk Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-install.md
Title: How to get started with the Azure AI SDK description: This article provides instructions on how to get started with the Azure AI SDK.-
Last updated 11/15/2023 + # How to get started with the Azure AI SDK
ai-studio Simulator Interaction Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/simulator-interaction-data.md
Title: How to use the Azure AI simulator for interaction data description: This article provides instructions on how to use the Azure AI simulator for interaction data.-
Last updated 11/15/2023 + # Generate AI-simulated datasets with your application
ai-studio Troubleshoot Deploy And Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md
Title: How to troubleshoot your deployments and monitors in Azure AI Studio description: This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Studio.-
Last updated 11/15/2023 + # How to troubleshoot your deployments and monitors in Azure AI Studio
ai-studio Vscode Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/vscode-web.md
Title: Get started with Azure AI projects in VS Code (Web) description: This article provides instructions on how to get started with Azure AI projects in VS Code (Web).-
Last updated 11/15/2023 + # Get started with Azure AI projects in VS Code (Web)
ai-studio Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md
Title: Moderate text and images with content safety in Azure AI Studio description: Use this article to moderate text and images with content safety in Azure AI Studio.-
Last updated 11/15/2023 + # QuickStart: Moderate text and images with content safety in Azure AI Studio
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
Title: Hear and speak with chat models in the Azure AI Studio playground description: Hear and speak with chat models in the Azure AI Studio playground.-
Last updated 11/15/2023 + # Quickstart: Hear and speak with chat models in the Azure AI Studio playground
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Title: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio description: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio.- Last updated 12/11/2023 + # Quickstart: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio
ai-studio Playground Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md
Title: Generate product name ideas in the Azure AI Studio playground description: Use this article to generate product name ideas in the Azure AI Studio playground.-
Last updated 11/15/2023 + # Quickstart: Generate product name ideas in the Azure AI Studio playground
ai-studio Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/region-support.md
Title: Azure AI Studio feature availability across clouds regions description: This article lists Azure AI Studio feature availability across clouds regions.- Last updated 12/11/2023-+ +
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
Title: Deploy a web app for chat on your data in the Azure AI Studio playground description: Use this article to deploy a web app for chat on your data in the Azure AI Studio playground.-
Last updated 11/15/2023 + # Tutorial: Deploy a web app for chat on your data
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
Title: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio description: Use this article to build and deploy a question and answer copilot with prompt flow in Azure AI Studio- Last updated 11/15/2023 + # Tutorial: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
Title: Using Azure AI Studio with a screen reader description: This tutorial guides you through using Azure AI Studio with a screen reader.-
Last updated 11/15/2023 + # Tutorial: Using Azure AI Studio with a screen reader
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
Title: What is AI Studio? description: Azure AI Studio brings together capabilities from across multiple Azure AI services. You can build generative AI applications on an enterprise-grade platform.- keywords: Azure AI services, cognitive
Last updated 11/15/2023 +
aks Access Control Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-control-managed-azure-ad.md
# Cluster access control with AKS-managed Microsoft Entra integration
-When you integrate Microsoft Entra ID with your AKS cluster, you can use [Conditional Access][aad-conditional-access] or Privileged Identity Management (PIM) for just-in-time requests to control access to your cluster. This article shows you how to enable Conditional Access and PIM on your AKS clusters.
+When you integrate Microsoft Entra ID with your AKS cluster, you can use [Conditional Access][aad-conditional-access] or [Privileged Identity Management (PIM)][pim-configure] for just-in-time requests to control access to your cluster. This article shows you how to enable Conditional Access and PIM on your AKS clusters.
> [!NOTE]
-> Microsoft Entra Conditional Access and Privileged Identity Management are Microsoft Entra ID P1 or P2 capabilities requiring a Premium P2 SKU. For more on Microsoft Entra ID SKUs, see the [pricing guide][aad-pricing].
+> Microsoft Entra Conditional Access and Privileged Identity Management (PIM) are Microsoft Entra ID P1, P2 or Governance capabilities requiring a Premium P2 SKU. For more on Microsoft Entra ID licenses and SKUs, see [Microsoft Entra ID Governance licensing fundamentals][licensing-fundamentals] and [pricing guide][aad-pricing].
## Before you begin
Make sure the admin of the security group has given your account an *Active* ass
<!-- LINKS - Internal --> [aad-conditional-access]: ../active-directory/conditional-access/overview.md
+[pim-configure]: /entra/id-governance/privileged-identity-management/pim-configure
+[licensing-fundamentals]: /entra/id-governance/licensing-fundamentals
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
AKSDEV_ID=$(az ad user create \
--display-name "AKS Dev" \ --user-principal-name $AAD_DEV_UPN \ --password $AAD_DEV_PW \
- --query objectId -o tsv)
+ --query id -o tsv)
``` 2. Add the user to the *appdev* group created in the previous section using the [`az ad group member add`][az-ad-group-member-add] command.
AKSSRE_ID=$(az ad user create \
--display-name "AKS SRE" \ --user-principal-name $AAD_SRE_UPN \ --password $AAD_SRE_PW \
- --query objectId -o tsv)
+ --query id -o tsv)
# Add the user to the opssre Azure AD group az ad group member add --group opssre --member-id $AKSSRE_ID
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
When you create an Azure disk for use with AKS, you can create the disk resource
volumeMounts: - name: azure mountPath: /mnt/azure
- volumeMounts
volumes: - name: azure persistentVolumeClaim:
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
A persistent volume claim (PVC) uses the storage class object to dynamically pro
```output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 10Gi RWX my-azurefile 5m
+ my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 100Gi RWX my-azurefile 5m
``` ### Use the persistent volume
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
This article introduces the core concepts that provide networking to your applic
* [Network policies](#network-policies) ## Kubernetes basics
+Kubernetes employs a virtual networking layer to manage access within and between your applications or their components. This involves the following key aspects:
-To allow access to your applications or between application components, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes connect to a virtual network, providing inbound and outbound connectivity for pods. The *kube-proxy* component runs on each node to provide these network features.
+- **Kubernetes nodes and virtual network**: Kubernetes nodes are connected to a virtual network. This setup enables pods (basic units of deployment in Kubernetes) to have both inbound and outbound connectivity.
-In Kubernetes:
+- **Kube-proxy component**: Running on each node, kube-proxy is responsible for providing the necessary network features.
-* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
-* *ServiceTypes* allow you to specify what kind of Service you want.
-* You can distribute traffic using a *load balancer*.
-* Layer 7 routing of application traffic can also be achieved with *ingress controllers*.
-* You can *control outbound (egress) traffic* for cluster nodes.
-* Security and filtering of the network traffic for pods is possible with *network policies*.
+Regarding specific Kubernetes functionalities:
-The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new Ingress routes are configured.
+- **Services**: These are used to logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
+- **Service types**: This feature lets you specify the kind of Service you wish to create.
+- **Load balancer**: You can use a load balancer to distribute network traffic evenly across various resources.
+- **Ingress controllers**: These facilitate Layer 7 routing, which is essential for directing application traffic.
+- **Egress traffic control**: Kubernetes allows you to manage and control outbound traffic from cluster nodes.
+- **Network policies**: These policies enable security measures and filtering for network traffic in pods.
+
+In the context of the Azure platform:
+
+- Azure streamlines virtual networking for AKS (Azure Kubernetes Service) clusters.
+- Creating a Kubernetes load balancer on Azure simultaneously sets up the corresponding Azure load balancer resource.
+- As you open network ports to pods, Azure automatically configures the necessary network security group rules.
+- Azure can also manage external DNS configurations for HTTP application routing as new Ingress routes are established.
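To illustrate the Services item above and how Azure sets up the load balancer behind it, here's a minimal, hypothetical `kubectl` sketch (deployment name and ports are placeholders):

```bash
# Expose an existing deployment through a Kubernetes Service of type LoadBalancer;
# on AKS this also creates and configures the corresponding Azure load balancer.
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# Watch for the external IP assigned by the Azure load balancer
kubectl get service my-app --watch
```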
## Services
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
A service mesh deploys extra containers for communication, typically in a [sidec
Sending and storing all logs from all possible sources (workloads, services, diagnostics, and platform activity) can increase storage and network traffic, which impacts costs and carbon emissions.
-* Make sure you're collecting and retaining only the necessary log data to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-agent-config.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+* Make sure you're collecting and retaining only the necessary log data to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-data-collection-configmap.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
### Cache static data
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 10/22/2023 Last updated : 01/05/2024 # Use Image Cleaner to clean up stale images on your Azure Kubernetes Service (AKS) cluster
kubectl logs -n kube-system <worker-pod-name> -c trivy-scanner
kubectl logs -n kube-system <worker-pod-name> -c remover ``` -- when `eraser-aks-xxxxx` was deleted, you can follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table to view historical pod logs.
+- The `eraser-aks-xxxxx` pod is deleted 10 minutes after its work completes. You can follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table. After that, historical logs are stored, and you can review them even after the `eraser-aks-xxxxx` pod is deleted.
+
1. Ensure Azure Monitoring is enabled on your cluster. For detailed steps, see [Enable Container Insights on AKS clusters](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). 2. Get the Log Analytics resource ID using the [`az aks show`][az-aks-show] command.
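A hedged sketch of that lookup; the `--query` path below assumes the default Container insights (`omsagent`) add-on profile, and the resource names are placeholders:

```bash
# Retrieve the Log Analytics workspace resource ID used by the Container insights add-on.
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID -o tsv
```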
aks Kubelogin Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelogin-authentication.md
Title: Using Kubelogin with Azure Kubernetes Service (AKS) description: Learn about using Kubelogin to enable all of the supported Azure Active Directory authentication methods with Azure Kubernetes Service (AKS). + Last updated 11/28/2023- # Use Kubelogin with Azure Kubernetes Service (AKS)
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Title: Manage SSH access on Azure Kubernetes Service cluster nodes
description: Learn how to configure SSH on Azure Kubernetes Service (AKS) cluster nodes. + Last updated 12/15/2023
To help troubleshoot any issues with SSH connectivity to your clusters nodes, yo
[view-master-logs]: monitor-aks-reference.md#resource-logs [node-image-upgrade]: node-image-upgrade.md [az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az-aks-nodepool-upgrade
-[network-security-group-rules-overview]: concepts-security.md#azure-network-security-groups
+[network-security-group-rules-overview]: concepts-security.md#azure-network-security-groups
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
AKS generates the same kinds of monitoring data as other Azure resources that ar
| Source | Description | |:|:| | Platform metrics | [Platform metrics](monitor-aks-reference.md#metrics) are automatically collected for AKS clusters at no cost. You can analyze these metrics with [metrics explorer](../azure-monitor/essentials/analyze-metrics.md) or use them for [metric alerts](../azure-monitor/alerts/alerts-types.md#metric-alerts). |
-| Prometheus metrics | When you [enable metric scraping](../azure-monitor/containers/prometheus-metrics-enable.md) for your cluster, [Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md) are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in [Azure Managed Grafana](../managed-grafan). |
+| Prometheus metrics | When you [enable metric scraping](../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) for your cluster, [Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md) are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in [Azure Managed Grafana](../managed-grafan). |
| Activity logs | [Activity log](monitor-aks-reference.md) is collected automatically for AKS clusters at no cost. These logs track information such as when a cluster is created or has a configuration change. Send the [Activity log to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with your other log data. | | Resource logs | Control plane logs for AKS are implemented as resource logs. [Create a diagnostic setting](#aks-control-planeresource-logs) to send them to [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) where you can analyze and alert on them with log queries in [Log Analytics](../azure-monitor/logs/log-analytics-overview.md). | | Container insights | Container insights collects various logs and performance data from a cluster including stdout/stderr streams and stores them in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) and [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Analyze this data with views and workbooks included with Container insights or with [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) and [metrics explorer](../azure-monitor/essentials/analyze-metrics.md). |
The following Azure services and features of Azure Monitor can be used for extra
| Service / Feature | Description | |:|:|
-| [Container insights](../azure-monitor/containers/container-insights-overview.md) | Uses a containerized version of the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md) to collect stdout/stderr logs, and Kubernetes events from each node in your cluster, supporting a [variety of monitoring scenarios for AKS clusters](../azure-monitor/containers/container-insights-overview.md#features-of-container-insights). You can enable monitoring for an AKS cluster when it's created by using [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure Policy](../azure-monitor/containers/container-insights-enable-aks-policy.md), Azure portal or Terraform. If you don't enable Container insights when you create your cluster, see [Enable Container insights for Azure Kubernetes Service (AKS) cluster](../azure-monitor/containers/container-insights-enable-aks.md) for other options to enable it.<br><br>Container insights store most of its data in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md), and you'll typically use the same log analytics workspace as the [resource logs](monitor-aks-reference.md#resource-logs) for your cluster. See [Design a Log Analytics workspace architecture](../azure-monitor/logs/workspace-design.md) for guidance on how many workspaces you should use and where to locate them. |
-| [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io/) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible monitoring solution in Azure. If you don't enable managed Prometheus when you create your cluster, see [Collect Prometheus metrics from an AKS cluster](../azure-monitor/essentials/prometheus-metrics-enable.md) for other options to enable it.<br><br>Azure Monitor managed service for Prometheus stores its data in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md), which is [linked to a Grafana workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace) so that you can analyze the data with Azure Managed Grafana. |
+| [Container insights](../azure-monitor/containers/container-insights-overview.md) | Uses a containerized version of the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md) to collect stdout/stderr logs, and Kubernetes events from each node in your cluster, supporting a [variety of monitoring scenarios for AKS clusters](../azure-monitor/containers/container-insights-overview.md). You can enable monitoring for an AKS cluster when it's created by using [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure Policy](../azure-monitor/containers/container-insights-enable-aks-policy.md), Azure portal or Terraform. If you don't enable Container insights when you create your cluster, see [Enable Container insights for Azure Kubernetes Service (AKS) cluster](../azure-monitor/containers/container-insights-enable-aks.md) for other options to enable it.<br><br>Container insights store most of its data in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md), and you'll typically use the same log analytics workspace as the [resource logs](monitor-aks-reference.md#resource-logs) for your cluster. See [Design a Log Analytics workspace architecture](../azure-monitor/logs/workspace-design.md) for guidance on how many workspaces you should use and where to locate them. |
+| [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io/) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible monitoring solution in Azure. If you don't enable managed Prometheus when you create your cluster, see [Collect Prometheus metrics from an AKS cluster](../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) for other options to enable it.<br><br>Azure Monitor managed service for Prometheus stores its data in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md), which is [linked to a Grafana workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace) so that you can analyze the data with Azure Managed Grafana. |
| [Azure Managed Grafana](../managed-grafan#link-a-grafana-workspace) details on linking it to your Azure Monitor workspace so it can access Prometheus metrics for your cluster. | ## Metrics
-Metrics play an important role in cluster monitoring, identifying issues, and optimizing performance in the AKS clusters. Platform metrics are captured using the out of the box metrics server installed in kube-system namespace, which periodically scrapes metrics from all Kubernetes nodes served by Kubelet. You should also enable Azure Managed Prometheus metrics to collect container metrics and Kubernetes object metrics, such as object state of Deployments. See [Collect Prometheus metrics from an AKS cluster](../azure-monitor/containers/prometheus-metrics-enable.md) to send data to Azure Managed service for Prometheus.
+Metrics play an important role in cluster monitoring, identifying issues, and optimizing performance in the AKS clusters. Platform metrics are captured using the out of the box metrics server installed in kube-system namespace, which periodically scrapes metrics from all Kubernetes nodes served by Kubelet. You should also enable Azure Managed Prometheus metrics to collect container metrics and Kubernetes object metrics, such as object state of Deployments. See [Collect Prometheus metrics from an AKS cluster](../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) to send data to Azure Managed service for Prometheus.
:::image type="content" source="media/monitor-aks/prometheus.png" alt-text="Screenshot of enabling Managed Prometheus for existing cluster." lightbox="media/monitor-aks/prometheus.png":::
Azure Monitor Container Insights provides a schema for container logs known as C
- PodNamespace In addition, this schema is compatible with [Basic Logs](../azure-monitor/logs/basic-logs-configure.md?tabs=portal-1#set-a-tables-log-data-plan) data plan, which offers a low-cost alternative to standard analytics logs. The Basic log data plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. For more information, see [Manage tables in a Log Analytics workspace](../azure-monitor/logs/manage-logs-tables.md?tabs=azure-portal).
-ContainerLogV2 is the recommended approach and is the default schema for customers onboarding container insights with Managed Identity Auth using ARM, Bicep, Terraform, Policy, and Azure portal. For more information about how to enable ContainerLogV2 through either the cluster's Data Collection Rule (DCR) or ConfigMap, see [Enable the ContainerLogV2 schema](../azure-monitor/containers/container-insights-logging-v2.md?tabs=configure-portal#enable-the-containerlogv2-schema-1).
+ContainerLogV2 is the recommended approach and is the default schema for customers onboarding container insights with Managed Identity Auth using ARM, Bicep, Terraform, Policy, and Azure portal. For more information about how to enable ContainerLogV2 through either the cluster's Data Collection Rule (DCR) or ConfigMap, see [Enable the ContainerLogV2 schema](../azure-monitor/containers/container-insights-logs-schema.md?tabs=configure-portal#enable-the-containerlogv2-schema).
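As a sketch of the Basic plan switch mentioned above, assuming the Log Analytics table commands apply to `ContainerLogV2` (workspace and resource group names are placeholders):

```bash
# Move the ContainerLogV2 table to the Basic log data plan to reduce ingestion cost.
az monitor log-analytics workspace table update \
  --resource-group myResourceGroup --workspace-name myWorkspace \
  --name ContainerLogV2 --plan Basic
```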
## Visualization
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.- Previously updated : 12/20/2023+ Last updated : 01/08/2024 #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might eventually need to access an AKS node directly. This access could be for maintenance, log collection, or troubleshooting operations.
-This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster.
+You access a node through authentication, and the available methods vary depending on the node OS and how you connect. You securely authenticate against AKS Linux and Windows nodes using SSH. Alternatively, for Windows Server nodes you can also connect using the [remote desktop protocol (RDP)][aks-windows-rdp].
+
+For security reasons, AKS nodes aren't exposed to the internet. Instead, to connect directly to an AKS node, you need to use either `kubectl debug` or the host's private IP address.
+
+This guide shows you how to create a connection to an AKS node and update the SSH key of your AKS cluster.
## Before you begin
-This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows], to know more refer [Manage SSH configuration][manage-ssh-node-access]. Make sure you save the key pair in an OpenSSH format, other formats like .ppk aren't supported.
+To follow these steps, you need Azure CLI version 2.0.64 or later. Run `az --version` to check your version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+If you don't have an SSH key, create one by following the steps for [macOS and Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in the OpenSSH format; other formats such as `.ppk` aren't supported. Then see [Manage SSH configuration][manage-ssh-node-access] to add the key to your cluster.
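If you still need a key pair, a minimal sketch that produces one in the OpenSSH format (the file path is an example):

```bash
# Generate an RSA key pair in OpenSSH format; accept the default path or pass -f explicitly.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
```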
+
+## Linux and macOS
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+Linux and macOS users can SSH into their nodes using `kubectl debug` or the node's private IP address. Windows users should skip to the Windows Server proxy connection section for a workaround to SSH via a proxy.
-## Create an interactive shell connection to a Linux node using kubectl
+### SSH using kubectl debug
-To create an interactive shell connection to a Linux node, use the `kubectl debug` command to run a privileged container on your node.
+To create an interactive shell connection, use the `kubectl debug` command to run a privileged container on your node.
1. To list your nodes, use the `kubectl get nodes` command: ```bash kubectl get nodes -o wide ```
-
- The following example resembles output from the command:
-
+
+ Sample output:
+ ```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS
To create an interactive shell connection to a Linux node, use the `kubectl debu
aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter ```
-2. Use the `kubectl debug` command to run a container image on the node to connect to it. The following command starts a privileged container on your node and connects to it.
+2. Use the `kubectl debug` command to start a privileged container on your node and connect to it.
```bash kubectl debug node/aks-nodepool1-37663765-vmss000000 -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0 ```
- The following example resembles output from the command:
+ Sample output:
```output Creating debugging pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx with container debugger on node aks-nodepool1-37663765-vmss000000. If you don't see a command prompt, try pressing enter. root@aks-nodepool1-37663765-vmss000000:/# ```
-
- This privileged container gives access to the node.
-
+
+ You now have access to the node through a privileged container as a debugging pod.
+ > [!NOTE] > You can interact with the node session by running `chroot /host` from the privileged container.
-### Remove Linux node access
+### Exit kubectl debug mode
-When you're done with a debugging pod, enter the `exit` command to end the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
+When you're done with your node, enter the `exit` command to end the interactive shell session. After the interactive container session closes, delete the debugging pod used with `kubectl delete pod`.
```bash kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx ```
-## Create an interactive shell connection to a node using private IP
-If you don't have access to the Kubernetes API, you can get access to properties such as ```Node IP``` and ```Node Name``` through the AKS Agentpool Preview API(preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools. For convenience, we also expose the public IP if the node has a public IP assigned. However in order to SSH into the node, you need to be in the cluster's virtual network.
+## Private IP Method
-1. To get the private IP via CLI, use az cli version 2.53 or above with aks-preview extension installed.
+If you don't have access to the Kubernetes API, you can get access to properties such as ```Node IP``` and ```Node Name``` through the [AKS Agent Pool Preview API][agent-pool-rest-api] (preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools.
-```bash
- az aks machine list --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name nodepool1 -o table
-
- ```
-
-The following example resembles output from the command:
-
- ```output
- Name Ip
- --
-aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4;
-aks-nodepool1-33555069-vmss000001 10.224.0.6,family:IPv4;
-aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
-```
-To target a specific node inside the nodepool, use this command:
+### Create an interactive shell connection to a node using the IP address
-```bash
- az aks machine show --cluster-name myAKScluster --nodepool-name nodepool1 -g myResourceGroup --machine-name aks-nodepool1-33555069-vmss000000 -o table
-
- ```
- The following example resembles output from the command:
+For convenience, the public IP is also exposed if the node has one assigned. However, you need to be in the cluster's virtual network to SSH into the node.
-```output
- Name Ip
- --
-aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4;
- ```
+1. To get the private IP address, use the `az aks machine list` command to show all the machines in a node pool.
-2. Use the private IP to SSH into the node. [Azure Bastion][azure-bastion] also provides you with information for securely connecting to virtual machines via private IP address. Make sure that you configure an Azure Bastion host for the virtual network in which the VM resides.
+ ```bash
+ az aks machine list --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name nodepool1 -o table
+ ```
-```bash
-ssh azureuser@10.224.0.33
-```
+ Sample output:
-## Create the SSH connection to a Windows node
+ ```output
+ Name Ip
+ --
+ aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4;
+ aks-nodepool1-33555069-vmss000001 10.224.0.6,family:IPv4;
+ aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
+ ```
+ To target a specific node inside the node pool, use the `az aks machine show` command with the `--machine-name` flag:
+
+ ```bash
+ az aks machine show --cluster-name myAKScluster --nodepool-name nodepool1 -g myResourceGroup --machine-name aks-nodepool1-33555069-vmss000000 -o table
+ ```
+ Sample output:
-At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH or use SSH with 'machines API' presented at the start of this document.
+ ```output
+ Name Ip
+ --
+ aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4;
+ ```
-To connect to another node in the cluster, use the `kubectl debug` command. For more information, see the Linux section.
+2. SSH to the node using its private IP address.
+
+ ```bash
+ ssh azureuser@10.224.0.33
+ ```
-To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
+3. Optionally, you can use [Azure Bastion][azure-bastion] to securely connect to your virtual machines over their private IP addresses. Make sure that the Azure Bastion host is deployed in the same virtual network as your VM.
+
+## Windows Server proxy connection
+
+Follow these steps as a workaround to connect with SSH on a Windows Server node.
+
+### Create a proxy server
+
+At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster with `kubectl`, then connect to the Windows Server node from that node using SSH. Alternatively, you can connect to Windows Server nodes using [remote desktop protocol (RDP) connections][aks-windows-rdp].
+
+To connect to another node in the cluster, use the `kubectl debug` command as described in the kubectl section earlier. Then create an SSH connection to the Windows Server node from that node, using the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
> [!IMPORTANT] >
-> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. AKS Update command can also be used to manage, create SSH keys on an existing AKS cluster. For more information refer [Manage SSH configuration][manage-ssh-node-access].
+> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. You can also use the AKS update command to manage or create SSH keys on an existing AKS cluster. For more information, see [Manage SSH node access][manage-ssh-node-access].
+
+Complete the prior `kubectl debug` steps first, then return to this section; the `kubectl debug` session acts as the proxy for the connection.
1. Open a new terminal window and use the `kubectl get pods` command to get the name of the pod started by `kubectl debug`.
To create the SSH connection to the Windows Server node from another node, use t
kubectl get pods ```
- The following example resembles output from the command:
+ Sample output:
```output NAME READY STATUS RESTARTS AGE node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx 1/1 Running 0 21s ```
- In the previous example, *node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
+ In the sample output, *node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
2. Use the `kubectl port-forward` command to open a connection to the deployed pod:
To create the SSH connection to the Windows Server node from another node, use t
kubectl port-forward node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx 2022:22 ```
- The following example resembles output from the command:
+ Sample output:
```output Forwarding from 127.0.0.1:2022 -> 22
To create the SSH connection to the Windows Server node from another node, use t
kubectl get no -o custom-columns=NAME:metadata.name,'INTERNAL_IP:status.addresses[?(@.type == \"InternalIP\")].address' ```
- The following example resembles output from the command:
+ Sample output:
```output NAME INTERNAL_IP
To create the SSH connection to the Windows Server node from another node, use t
ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.224.0.62 ```
- The following example resembles output from the command:
+ Sample output:
```output The authenticity of host '10.224.0.62 (10.224.0.62)' can't be established. ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG. Are you sure you want to continue connecting (yes/no)? yes
-
- [...]
-
- Microsoft Windows [Version 10.0.17763.1935]
- (c) 2018 Microsoft Corporation. All rights reserved.
-
- azureuser@aksnpwin000000 C:\Users\azureuser>
``` > [!NOTE]
To create the SSH connection to the Windows Server node from another node, use t
## Next steps
-If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes control plane logs][view-control-plane-logs].
-See [Manage SSH configuration][manage-ssh-node-access] to learn about managing the SSH key on an AKS cluster or node pools.
+To learn about managing your SSH keys, see [Manage SSH configuration][manage-ssh-node-access].
<!-- INTERNAL LINKS --> [view-kubelet-logs]: kubelet-logs.md
-[view-master-logs]: monitor-aks-reference.md#resource-logs
+[view-control-plane-logs]: monitor-aks-reference.md#resource-logs
[install-azure-cli]: /cli/azure/install-azure-cli [aks-windows-rdp]: rdp.md [azure-bastion]: ../bastion/bastion-overview.md [ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md [ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
-[agentpool-rest-api]: /rest/api/aks/agent-pools/get#agentpool
+[agent-pool-rest-api]: /rest/api/aks/agent-pools/get#agentpool
[manage-ssh-node-access]: manage-ssh-node-access.md
aks Node Autoprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-autoprovision.md
Title: Node autoprovisioning (Preview) description: Learn about Azure Kubernetes Service (AKS) Node autoprovisioning + Last updated 10/19/2023 #Customer intent: As a cluster operator or developer, how to scale my cluster based on workload requirements and right size my nodes automatically
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance - AKS initiated and those that you initiate. Planned Maintenance feature allows you to run both types of maintenance in a cadence of your choice thereby minimizing any workload impact.
-AKS intiated maintenance refers to the AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The type of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
+AKS initiated maintenance refers to the AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The type of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`:
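A hedged sketch of creating the `default` configuration; the flag names are assumptions based on the `az aks maintenanceconfiguration` command group, and the schedule values are placeholders:

```bash
# Allow regular maintenance only on Mondays starting at 01:00 (default configuration).
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup --cluster-name myAKSCluster \
  --name default --weekday Monday --start-hour 1
```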
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Note the following important changes before you upgrade to any of the available
| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 | 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes |None | 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
-| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v2.0.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None
+| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None
## Alias minor version
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 08/04/2023 Last updated : 01/04/2024 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
The above example stores the value of the identity resource ID in *IDENTITY_RESO
### Assign permissions (decrypt and encrypt) to access key vault
+> [!NOTE]
+> When using a private key vault, AKS can't validate the permissions of the identity. Verify the identity has been granted permission to access the key vault before enabling KMS.
+ #### For non-RBAC key vault If your key vault is not enabled with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure key vault policy.
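A minimal sketch of that policy assignment for a non-RBAC vault; the vault name and the `IDENTITY_OBJECT_ID` variable are placeholders for illustration:

```bash
# Grant the cluster identity decrypt and encrypt permissions on the key vault.
az keyvault set-policy --name myKeyVault \
  --object-id $IDENTITY_OBJECT_ID \
  --key-permissions decrypt encrypt
```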
api-management Import Api From Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-odata.md
Previously updated : 06/06/2023 Last updated : 01/03/2024
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
tags: azure-resource-manager, bicep-+ Last updated 12/12/2023
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
description: Use this quickstart to create an Azure API Management instance usin
Last updated 12/12/2023-+ content_well_notification:
In this article, you learn how to:
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Import and publish your first API](import-and-publish.md)
+> [Tutorial: Import and publish your first API](import-and-publish.md)
app-service App Service Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md
Title: Best Practices
-description: Learn best practices and the common troubleshooting scenarios for your app running in Azure App Service.
+ Title: Best practices for Azure App Service
+description: Learn best practices and common troubleshooting scenarios for your app running in Azure App Service.
ms.assetid: f3359464-fa44-4f4a-9ea6-7821060e8d0d
-# Best Practices for Azure App Service
-This article summarizes best practices for using [Azure App Service](./overview.md).
+# Best practices for Azure App Service
+
+This article summarizes best practices for using [Azure App Service](./overview.md).
## <a name="colocation"></a>Colocation
-When Azure resources composing a solution such as a web app and a database are located in different regions, it can have the following effects:
+
+An Azure App Service solution consists of a web app and a database or storage account for holding content or data. When these resources are in different regions, the situation can have the following effects:
* Increased latency in communication between resources
-* Monetary charges for outbound data transfer cross-region as noted on the [Azure pricing page](https://azure.microsoft.com/pricing/details/data-transfers).
+* Monetary charges for outbound data transfer across regions, as noted on the [Azure pricing page](https://azure.microsoft.com/pricing/details/data-transfers)
+
+Colocation is best for Azure resources that compose a solution. When you create resources, make sure they're in the same Azure region unless you have specific business or design reasons for them not to be. You can move an App Service app to the same region as your database by using the [App Service cloning feature](app-service-web-app-cloning.md) available in Premium App Service plans.
+
+## <a name ="certificatepinning"></a>Certificate pinning
+
+Certificate pinning is a practice in which an application allows only a specific list of acceptable certificate authorities (CAs), public keys, thumbprints, or any part of the certificate hierarchy.
+
+Applications should never have a hard dependency or pin to the default wildcard (`*.azurewebsites.net`) TLS certificate. App Service is a platform as a service (PaaS), so this certificate could be rotated anytime. If the service rotates the default wildcard TLS certificate, certificate-pinned applications will break and disrupt the connectivity for applications that are hardcoded to a specific set of certificate attributes. The periodicity with which the certificate is rotated is also not guaranteed because the rotation frequency can change at any time.
+
+Applications that rely on certificate pinning also shouldn't have a hard dependency on an App Service managed certificate. App Service managed certificates could be rotated anytime, leading to similar problems for applications that rely on stable certificate properties. It's a best practice to provide a custom TLS certificate for applications that rely on certificate pinning.
+
+If your application needs to rely on certificate pinning behavior, we recommend that you add a custom domain to a web app and provide a custom TLS certificate for the domain. The application can then rely on the custom TLS certificate for certificate pinning.
+
+## <a name="memoryresources"></a>Memory resources
+
+When monitoring or service recommendations indicate that an app consumes more memory than you expected, consider the [App Service auto-healing feature](/azure/app-service/overview-diagnostics#auto-healing). You can configure auto-healing by using *web.config*.
-Colocation in the same region is best for Azure resources composing a solution such as a web app and a database or storage account used to hold content or data. When creating resources, make sure they are in the same Azure region unless you have specific business or design reason for them not to be. You can move an App Service app to the same region as your database by using the [App Service cloning feature](app-service-web-app-cloning.md) currently available for Premium App Service Plan apps.
+One of the options for the auto-healing feature is taking custom actions based on a memory threshold. Actions range from email notifications to investigation via memory dump to on-the-spot mitigation by recycling the worker process.
-## <a name ="certificatepinning"></a>Certificate Pinning
-Applications should never have a hard dependency or pin to the default \*.azurewebsites.net TLS certificate because the \*.azurewebsites.net TLS certificate could be rotated anytime given the nature of App Service as a Platform as a Service (PaaS). Certificate pinning is a practice where an application only allows a specific list of acceptable Certificate Authorities (CAs), public keys, thumbprints, or any part of the certificate hierarchy. In the event that the service rotates the App Service default wildcard TLS certificate, certificate pinned applications will break and disrupt the connectivity for applications that are hardcoded to a specific set of certificate attributes. The periodicity with which the \*.azurewebsites.net TLS certificate is rotated is also not guaranteed since the rotation frequency can change at any time.
+## <a name="CPUresources"></a>CPU resources
-Note that applications which rely on certificate pinning should also not have a hard dependency on an App Service Managed Certificate. App Service Managed Certificates could be rotated anytime, leading to similar problems for applications that rely on stable certificate properties. It is best practice to provide a custom TLS certificate for applications that rely on certificate pinning.
+When monitoring or service recommendations indicate that an app consumes more CPU than you expected or it experiences repeated CPU spikes, consider scaling up or scaling out the App Service plan. If your application is stateful, scaling up is the only option. If your application is stateless, scaling out gives you more flexibility and higher scale potential.
-If an application needs to rely on certificate pinning behavior, it is recommended to add a custom domain to a web app and provide a custom TLS certificate for the domain which can then be relied on for certificate pinning.
+For more information about App Service scaling and autoscaling options, see [Scale up an app in Azure App Service](manage-scale-up.md).
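For example, hypothetical CLI calls for each path (plan name, SKU, and instance count are placeholders):

```bash
# Scale up: move the App Service plan to a larger SKU.
az appservice plan update --name myAppServicePlan --resource-group myResourceGroup --sku P2V3

# Scale out: run more instances of a stateless app.
az appservice plan update --name myAppServicePlan --resource-group myResourceGroup --number-of-workers 3
```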
-## <a name="memoryresources"></a>When apps consume more memory than expected
-When you notice an app consumes more memory than expected as indicated via monitoring or service recommendations, consider the [App Service Auto-Healing feature](https://azure.microsoft.com/blog/auto-healing-windows-azure-web-sites). One of the options for the Auto-Healing feature is taking custom actions based on a memory threshold. Actions span the spectrum from email notifications to investigation via memory dump to on-the-spot mitigation by recycling the worker process. Auto-healing can be configured via web.config and via a friendly user interface as described at in this blog post for the App Service Support Site Extension.
+## <a name="socketresources"></a>Socket resources
-## <a name="CPUresources"></a>When apps consume more CPU than expected
-When you notice an app consumes more CPU than expected or experiences repeated CPU spikes as indicated via monitoring or service recommendations, consider scaling up or scaling out the App Service plan. If your application is stateful, scaling up is the only option, while if your application is stateless, scaling out gives you more flexibility and higher scale potential.
+A common reason for exhausting outbound TCP connections is the use of client libraries that don't reuse TCP connections or that don't use a higher-level protocol such as HTTP keep-alive.
-For more information about App Service scaling and autoscaling options, see [Scale a Web App in Azure App Service](manage-scale-up.md).
+Review the documentation for each library that the apps in your App Service plan reference. Ensure that the libraries are configured or accessed in your code for efficient reuse of outbound connections. Also follow the library documentation guidance for proper creation and release or cleanup to avoid leaking connections. While such investigations into client libraries are in progress, you can mitigate impact by scaling out to multiple instances.
-## <a name="socketresources"></a>When socket resources are exhausted
-A common reason for exhausting outbound TCP connections is the use of client libraries, which are not implemented to reuse TCP connections, or when a higher-level protocol such as HTTP - Keep-Alive is not used. Review the documentation for each of the libraries referenced by the apps in your App Service Plan to ensure they are configured or accessed in your code for efficient reuse of outbound connections. Also follow the library documentation guidance for proper creation and release or cleanup to avoid leaking connections. While such client libraries investigations are in progress, impact may be mitigated by scaling out to multiple instances.
+### Node.js and outgoing HTTP requests
-### Node.js and outgoing http requests
-When working with Node.js and many outgoing http requests, dealing with HTTP - Keep-Alive is important. You can use the [agentkeepalive](https://www.npmjs.com/package/agentkeepalive) `npm` package to make it easier in your code.
+When you're working with Node.js and many outgoing HTTP requests, dealing with HTTP keep-alive is important. You can use the [agentkeepalive](https://www.npmjs.com/package/agentkeepalive) `npm` package to make it easier in your code.
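For instance, adding the package to a project is a one-liner (an existing Node.js project is assumed):

```bash
# Add agentkeepalive so outbound HTTP requests reuse sockets via keep-alive agents.
npm install agentkeepalive --save
```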
-Always handle the `http` response, even if you do nothing in the handler. If you don't handle the response properly, your application gets stuck eventually because no more sockets are available.
+Always handle the `http` response, even if you do nothing in the handler. If you don't handle the response properly, your application eventually gets stuck because no more sockets are available.
-For example, when working with the `http` or `https` package:
+Here's an example of handling the response when you're working with the `http` or `https` package:
```javascript const request = https.request(options, function(response) {
const request = https.request(options, function(response) {
}); ```
-If you are running on App Service on Linux on a machine with multiple cores, another best practice is to use PM2 to start multiple Node.js processes to execute your application. You can do it by specifying a startup command to your container.
+If you're running your App Service app on a Linux machine that has multiple cores, another best practice is to use PM2 to start multiple Node.js processes to run your application. You can do it by specifying a startup command to your container.
-For example, to start four instances:
+For example, use this command to start four instances:
``` pm2 start /home/site/wwwroot/app.js --no-daemon -i 4 ```
-## <a name="appbackup"></a>When your app backup starts failing
-The two most common reasons why app backup fails are: invalid storage settings and invalid database configuration. These failures typically happen when there are changes to storage or database resources, or changes for how to access these resources (for example, credentials updated for the database selected in the backup settings). Backups typically run on a schedule and require access to storage (for outputting the backed-up files) and databases (for copying and reading contents to be included in the backup). The result of failing to access either of these resources would be consistent backup failure.
+## <a name="appbackup"></a>App backup
+
+Backups typically run on a schedule and require access to storage (for outputting the backed-up files) and databases (for copying and reading contents to be included in the backup). The result of failing to access either of these resources is consistent backup failure.
+
+The two most common reasons why app backup fails are invalid storage settings and invalid database configuration. These failures typically happen after changes to storage or database resources, or after changes to credentials for accessing those resources. For example, credentials might be updated for the database that you selected in the backup settings.
+
+When backup failures happen, review the most recent results to understand which type of failure is happening. For storage access failures, review and update the storage settings in your backup configuration. For database access failures, review and update your connection strings as part of app settings. Then proceed to update your backup configuration to properly include the required databases.
+
+For more information on app backups, see [Back up and restore your app in Azure App Service](manage-backup.md).
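To review recent results from the CLI, a hedged sketch (app and resource group names are placeholders; the portal's backup blade shows the same information):

```bash
# List backups and their status for a web app to spot failing runs.
az webapp config backup list --resource-group myResourceGroup --webapp-name myWebApp
```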
+
+## <a name="nodejs"></a>Node.js apps
+
+The Azure App Service default configuration for Node.js apps is intended to best suit the needs of most common apps. If you want to personalize the default configuration for your Node.js app to improve performance or optimize resource usage for CPU, memory, or network resources, see [Best practices and troubleshooting guide for Node applications on Azure App Service](app-service-web-nodejs-best-practices-and-troubleshoot-guide.md). That article describes the iisnode settings that you might need to configure for your Node.js app. It also explains how to address scenarios or problems with your app.
+
+## <a name="iotdevices"></a>IoT devices
+
+You can improve your environment when you're running Internet of Things (IoT) devices that are connected to App Service.
+
+One common practice with IoT devices is certificate pinning. To avoid any unforeseen downtime due to changes in the service's managed certificates, you should never pin certificates to the default `*.azurewebsites.net` certificate or to an App Service managed certificate. If your system needs to rely on certificate pinning behavior, we recommend that you add a custom domain to a web app and provide a custom TLS certificate for the domain. The application can then rely on the custom TLS certificate for certificate pinning. For more information, see the [certificate pinning](#certificatepinning) section of this article.
-When backup failures happen, review most recent results to understand which type of failure is happening. For storage access failures, review and update the storage settings used in the backup configuration. For database access failures, review and update your connections strings as part of app settings; then proceed to update your backup configuration to properly include the required databases. For more information on app backups, see [Back up a web app in Azure App Service](manage-backup.md).
+To increase resiliency in your environment, don't rely on a single endpoint for all your devices. Host your web apps in at least two regions to avoid a single point of failure, and be ready to fail over traffic.
-## <a name="nodejs"></a>When new Node.js apps are deployed to Azure App Service
-Azure App Service default configuration for Node.js apps is intended to best suit the needs of most common apps. If configuration for your Node.js app would benefit from personalized tuning to improve performance or optimize resource usage for CPU/memory/network resources, see [Best practices and troubleshooting guide for Node applications on Azure App Service](app-service-web-nodejs-best-practices-and-troubleshoot-guide.md). This article describes the iisnode settings you may need to configure for your Node.js app, describes the various scenarios or issues that your app may be facing, and shows how to address these issues.
+In App Service, you can add identical custom domains to multiple web apps, as long as these web apps are hosted in different regions. This capability ensures that if you need to pin certificates, you can also pin on the custom TLS certificate that you provided.
-## <a name=""></a>When Internet of Things (IoT) devices are connected to apps on App Service
-There are a few scenarios where you can improve your environment when running Internet of Things (IoT) devices that are connected to App Service. One very common practice with IoT devices is "certificate pinning". To avoid any unforeseen downtime due to changes in the service's managed certificates, you should never pin certificates to the default \*.azurewebsites.net certificate nor to an App Service Managed Certificate. If your system needs to rely on certificate pinning behavior, it is recommended to add a custom domain to a web app and provide a custom TLS certificate for the domain which can then be relied on for certificate pinning. You can refer to the [certificate pinning](#certificatepinning) section of this article for more information.
+Another option is to use a load balancer in front of the web apps, such as Azure Front Door or Azure Traffic Manager, to ensure high availability for your web apps. For more information, see [Quickstart: Create a Front Door instance for a highly available global web application](../frontdoor/quickstart-create-front-door.md) or [Controlling Azure App Service traffic with Azure Traffic Manager](./web-sites-traffic-manager.md).
-To increase resiliency in your environment, you should not rely on a single endpoint for all your devices. You should at least host your web apps in two different regions to avoid a single point of failure and be ready to failover traffic. On App Service, you can add identical custom domain to different web apps as long as these web apps are hosted in different regions. This ensures that if you need to pin certificates, you can also pin on the custom TLS certificate that you provided. Another option would be to use a load balancer in front of the web apps, such as Azure Front Door or Traffic Manager, to ensure high availability for your web apps. You can refer to [Quickstart: Create a Front Door for a highly available global web application](../frontdoor/quickstart-create-front-door.md) or [Controlling Azure App Service traffic with Azure Traffic Manager](./web-sites-traffic-manager.md) for more information.
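A hedged sketch of the custom domain and certificate approach described above (domain, file paths, and thumbprint are placeholders):

```bash
# Map a custom domain to the web app so devices can pin to a certificate you control.
az webapp config hostname add --resource-group myResourceGroup \
  --webapp-name myWebApp --hostname devices.contoso.com

# Upload a custom TLS certificate, then bind it to the web app for that hostname.
az webapp config ssl upload --resource-group myResourceGroup --name myWebApp \
  --certificate-file ./devices-contoso-com.pfx --certificate-password '<password>'
az webapp config ssl bind --resource-group myResourceGroup --name myWebApp \
  --certificate-thumbprint <thumbprint> --ssl-type SNI
```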
+## Next steps
-## Next Steps
-For more information on best practices, visit [App Service Diagnostics](./overview-diagnostics.md) to find out actionable best practices specific to your resource.
+To get actionable best practices that are specific to your resource, use [App Service diagnostics](./overview-diagnostics.md):
-- Navigate to your Web App in the [Azure portal](https://portal.azure.com).-- Click on **Diagnose and solve problems** in the left navigation, which opens App Service Diagnostics.-- Choose **Best Practices** homepage tile.-- Click **Best Practices for Availability & Performance** or **Best Practices for Optimal Configuration** to view the current state of your app in regards to these best practices.
+1. Go to your web app in the [Azure portal](https://portal.azure.com).
+1. Open App Service diagnostics by selecting **Diagnose and solve problems** on the left pane.
+1. Select the **Best Practices** tile.
+1. Select **Best Practices for Availability & Performance** or **Best Practices for Optimal Configuration** to view the current state of your app in regard to these best practices.
-You can also use this link to directly open App Service Diagnostics for your resource: `https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FParentAvailabilityAndPerformance#@microsoft.onmicrosoft.com/resource/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/troubleshoot`.
+You can also use this link to directly open App Service diagnostics for your resource: `https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FParentAvailabilityAndPerformance#@microsoft.onmicrosoft.com/resource/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/troubleshoot`.
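If you generate that link in tooling, it's plain string substitution over your own resource identifiers. A minimal C# sketch with placeholder values:

```csharp
// Placeholder identifiers; substitute your own subscription, resource group, and app name.
string subscriptionId = "00000000-0000-0000-0000-000000000000";
string resourceGroupName = "my-resource-group";
string siteName = "my-web-app";

string diagnosticsUrl =
    "https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FParentAvailabilityAndPerformance" +
    $"#@microsoft.onmicrosoft.com/resource/subscriptions/{subscriptionId}" +
    $"/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/troubleshoot";

System.Console.WriteLine(diagnosticsUrl);
```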
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
tags: buy-ssl-certificates
Last updated 04/20/2023 -+
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
Last updated 12/23/2023 --+ # Language runtime support policy for App Service
If a supported Java runtime will be retired, Azure developers using the affected
Developers can download the Microsoft Build of OpenJDK for local development from [our download site](/java/openjdk/download). Product support for the [Microsoft Build of OpenJDK](/java/openjdk/download) is available through Microsoft when developing for Azure or [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) with a [qualified Azure support plan](https://azure.microsoft.com/support/plans/).-
app-service Operating System Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md
Title: Operating system functionality
-description: Learn about the OS functionality in Azure App Service on Windows. Find out what types of file, network, and registry access your app gets.
+ Title: Operating system functionality in Azure App Service
+description: Learn what types of file, network, and registry access your Windows app gets when it runs in Azure App Service.
ms.assetid: 39d5514f-0139-453a-b52e-4a1c06d8d914
Last updated 01/21/2022
-# Operating system functionality on Azure App Service
-This article describes the common baseline operating system functionality that is available to all Windows apps running on [Azure App Service](./overview.md). This functionality includes file, network, and registry access, and diagnostics logs and events.
+# Operating system functionality in Azure App Service
-> [!NOTE]
-> [Linux apps](overview.md#app-service-on-linux) in App Service run in their own containers. You have root access to the container but no access to the host operating system is allowed. Likewise, for [apps running in Windows containers](quickstart-custom-container.md?pivots=container-windows), you have administrative access to the container but no access to the host operating system.
->
+This article describes the baseline operating system functionality that's available to all Windows apps running in [Azure App Service](./overview.md). This functionality includes file, network, and registry access, along with diagnostics logs and events.
+
+> [!NOTE]
+> [Linux apps](overview.md#app-service-on-linux) in App Service run in their own containers. You have root access to the container but no access to the host operating system. Likewise, for [apps running in Windows containers](quickstart-custom-container.md?pivots=container-windows), you have administrative access to the container but no access to the host operating system.
<a id="tiers"></a> ## App Service plan tiers
-App Service runs customer apps in a multi-tenant hosting environment. Apps deployed in the **Free** and **Shared** tiers run in worker processes on shared virtual machines, while apps deployed in the **Standard** and **Premium** tiers run on virtual machine(s) dedicated specifically for the apps associated with a single customer.
+
+App Service runs customer apps in a multitenant hosting environment. Apps deployed in the Free and Shared tiers run in worker processes on shared virtual machines (VMs). Apps deployed in the Standard and Premium tiers run on VMs dedicated specifically for the apps associated with a single customer.
[!INCLUDE [app-service-dev-test-note](../../includes/app-service-dev-test-note.md)]
-Because App Service supports a seamless scaling experience between different tiers, the security configuration enforced for App Service apps remains the same. This ensures that apps don't suddenly behave differently, failing in unexpected ways, when an App Service plan switches from one tier to another.
+Because App Service supports a seamless scaling experience between tiers, the security configuration enforced for App Service apps remains the same. This configuration ensures that apps don't suddenly behave differently and fail in unexpected ways when an App Service plan switches from one tier to another.
<a id="developmentframeworks"></a> ## Development frameworks+ App Service pricing tiers control the amount of compute resources (CPU, disk storage, memory, and network egress) available to apps. However, the breadth of framework functionality available to apps remains the same regardless of the scaling tiers.
-App Service supports a variety of development frameworks, including ASP.NET, classic ASP, Node.js, PHP, and Python.
-In order to simplify and normalize security configuration, App Service apps typically run the various development frameworks with their default settings. The frameworks and runtime components provided by the platform are updated regularly to satisfy security and compliance requirements, for this reason we don't guarantee specific minor/patch versions and recommend customers target major version as needed.
+App Service supports various development frameworks, including ASP.NET, classic ASP, Node.js, PHP, and Python. To simplify and normalize security configuration, App Service apps typically run the development frameworks with their default settings. The frameworks and runtime components that the platform provides are updated regularly to satisfy security and compliance requirements. For this reason, we don't guarantee specific minor/patch versions. We recommend that customers target major versions as needed.
The following sections summarize the general kinds of operating system functionality available to App Service apps. <a id="FileAccess"></a> ## File access+ Various drives exist within App Service, including local drives and network drives. <a id="LocalDrives"></a> ### Local drives
-At its core, App Service is a service running on top of the Azure PaaS (platform as a service) infrastructure. As a result, the local drives that are "attached" to a virtual machine are the same drive types available to any worker role running in Azure. This includes:
-- An operating system drive (`%SystemDrive%`), whose size varies depending on the size of the VM.-- A resource drive (`%ResourceDrive%`) used by App Service internally.
+At its core, App Service is a service running on top of the Azure platform as a service (PaaS) infrastructure. As a result, the local drives that are associated with a virtual machine are the same drive types available to any worker role running in Azure. They include:
-A best practice is to always use the environment variables `%SystemDrive%` and `%ResourceDrive%` instead of hard-coded file paths. The root path returned from these two environment variables has shifted over time from `d:\` to `c:\`. However, older applications hard-coded with file path references to `d:\` will continue to work because the App Service platform automatically remaps `d:\` to instead point at `c:\`. As noted above, it's highly recommended to always use the environment variables when building file paths and avoid confusion over platform changes to the default root file path.
+- An operating system drive (`%SystemDrive%`) whose size depends on the size of the VM.
+- A resource drive (`%ResourceDrive%`) that App Service uses internally.
-It's important to monitor your disk utilization as your application grows. If the disk quota is reached, it can have adverse effects to your application. For example:
+A best practice is to always use the environment variables `%SystemDrive%` and `%ResourceDrive%` instead of hard-coded file paths. The root path returned from these two environment variables has shifted over time from `d:\` to `c:\`. However, older applications hard-coded with file path references to `d:\` continue to work because App Service automatically remaps `d:\` to point at `c:\`. As noted earlier, we highly recommend that you always use the environment variables when building file paths and avoid confusion over platform changes to the default root file path.
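For example, here's a minimal C# sketch of building a path from those environment variables instead of hard-coding a drive letter; the file name is illustrative only:

```csharp
using System;
using System.IO;

static class AppPaths
{
    // Builds a path under the app's temporary local storage without assuming d:\ or c:\.
    public static string GetScratchFilePath()
    {
        // %SystemDrive% resolves to the current root drive, for example "C:".
        string systemDrive = Environment.GetEnvironmentVariable("SystemDrive") ?? "C:";

        // "scratch.dat" is a hypothetical file name for illustration.
        return Path.Combine(systemDrive + Path.DirectorySeparatorChar, "local", "Temp", "scratch.dat");
    }
}
```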
-- The app may throw an error indicating not enough space on the disk.-- You may see disk errors when browsing to the Kudu console.-- Deployment from Azure DevOps or Visual Studio may fail with `ERROR_NOT_ENOUGH_DISK_SPACE: Web deployment task failed. (Web Deploy detected insufficient space on disk)`.-- Your app may suffer slow performance.
+It's important to monitor your disk utilization as your application grows. Reaching the disk quota can have adverse effects on your application. For example:
+
+- The app might throw an error that indicates there's not enough space on the disk.
+- You might see disk errors when browsing to the Kudu console.
+- Deployment from Azure DevOps or Visual Studio might fail with `ERROR_NOT_ENOUGH_DISK_SPACE: Web deployment task failed. (Web Deploy detected insufficient space on disk)`.
+- Your app might have slow performance.
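One way to watch for this condition is to periodically total the size of the app's content share and compare it against your plan's quota. A rough C# sketch (the 90 percent threshold is an arbitrary example):

```csharp
using System;
using System.IO;
using System.Linq;

static class QuotaWatcher
{
    // Sums everything under %HOME% so that usage can be compared against the plan's file storage quota.
    public static void ReportUsage(long quotaBytes)
    {
        string home = Environment.GetEnvironmentVariable("HOME") ?? @"C:\home";

        long usedBytes = new DirectoryInfo(home)
            .EnumerateFiles("*", SearchOption.AllDirectories)
            .Sum(file => file.Length);

        double percentUsed = (double)usedBytes / quotaBytes * 100;
        Console.WriteLine($"Content share usage: {usedBytes / (1024 * 1024)} MB ({percentUsed:F1}% of quota)");

        // Arbitrary example threshold; route alerts through your monitoring pipeline in a real app.
        if (percentUsed > 90)
        {
            Console.WriteLine("Warning: this app is approaching its disk quota.");
        }
    }
}
```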
<a id="NetworkDrives"></a> ### Network drives (UNC shares)
-One of the unique aspects of App Service that makes app deployment and maintenance straightforward is that all content shares are stored on a set of UNC shares. This model maps well to the common pattern of content storage used by on-premises web hosting environments that have multiple load-balanced servers.
-Within App Service, there is a number of UNC shares created in each data center. A percentage of the user content for all customers in each data center is allocated to each UNC share. Each customer's subscription has a reserved directory structure on a specific UNC share within a data center. A customer may have multiple apps created within a specific data center, so all of the directories belonging to a single customer subscription are created on the same UNC share.
+One of the unique aspects of App Service that make app deployment and maintenance straightforward is that all content shares are stored on a set of UNC shares. This model maps well to the common pattern of content storage used by on-premises web hosting environments that have multiple load-balanced servers.
+
+Within App Service, UNC shares are created in each datacenter. A percentage of the user content for all customers in each datacenter is allocated to each UNC share. Each customer's subscription has a reserved directory structure on a specific UNC share in a datacenter. A customer might have multiple apps created in a specific datacenter, so all of the directories that belong to a single customer subscription are created on the same UNC share.
-Due to how Azure services work, the specific virtual machine responsible for hosting a UNC share will change over time. It is guaranteed that UNC shares will be mounted by different virtual machines as they're brought up and down during the normal course of Azure operations. For this reason, apps should never make hard-coded assumptions that the machine information in a UNC file path will remain stable over time. Instead, they should use the convenient *faux* absolute path `%HOME%\site` that App Service provides. This faux absolute path provides a portable, app-and-user-agnostic method for referring to one's own app. By using `%HOME%\site`, one can transfer shared files from app to app without having to configure a new absolute path for each transfer.
+Because of the way that Azure services work, the specific virtual machine responsible for hosting a UNC share changes over time. UNC shares are mounted by different virtual machines as they're brought up and down during the normal course of Azure operations. For this reason, apps should never make hard-coded assumptions that the machine information in a UNC file path will remain stable over time. Instead, they should use the convenient *faux* absolute path `%HOME%\site` that App Service provides.
+
+The faux absolute path is a portable method for referring to your own app. It's not specific to any app or user. By using `%HOME%\site`, you can transfer shared files from app to app without having to configure a new absolute path for each transfer.
<a id="TypesOfFileAccess"></a> ### Types of file access granted to an app
-The `%HOME%` directory in an app maps to a content share in Azure Storage dedicated for that app, and its size is defined by your [pricing tier](https://azure.microsoft.com/pricing/details/app-service/). It may include directories such as those for content, error and diagnostic logs, and earlier versions of the app created by source control. These directories are available to the app's application code at runtime for read and write access. Because the files aren't stored locally, they're persistent across app restarts.
-On the system drive, App Service reserves `%SystemDrive%\local` for app-specific temporary local storage. Changes to files in this directory are *not* persistent across app restarts. Although an app has full read/write access to its own temporary local storage, that storage really isn't intended to be used directly by the application code. Rather, the intent is to provide temporary file storage for IIS and web application frameworks. App Service also limits the amount of storage in `%SystemDrive%\local` for each app to prevent individual apps from consuming excessive amounts of local file storage. For **Free**, **Shared**, and **Consumption** (Azure Functions) tiers, the limit is 500 MB. See the following table for other tiers:
+The `%HOME%` directory in an app maps to a content share in Azure Storage dedicated for that app. Your [pricing tier](https://azure.microsoft.com/pricing/details/app-service/) defines its size. It might include directories such as those for content, error and diagnostic logs, and earlier versions of the app that source control created. These directories are available to the app's application code at runtime for read and write access. Because the files aren't stored locally, they're persistent across app restarts.
+
+On the system drive, App Service reserves `%SystemDrive%\local` for app-specific temporary local storage. Changes to files in this directory are *not* persistent across app restarts. Although an app has full read and write access to its own temporary local storage, that storage isn't intended for direct use by the application code. Rather, the intent is to provide temporary file storage for IIS and web application frameworks.
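The distinction matters when you choose where to write files. Here's a brief C# sketch that contrasts the two locations; the file names are illustrative, and the sketch assumes the usual `%HOME%` and `%SystemDrive%` variables are present:

```csharp
using System;
using System.IO;

static class StorageLocations
{
    public static void WriteExamples()
    {
        // Persistent: %HOME% maps to the app's content share and survives restarts.
        string persistentPath = Path.Combine(
            Environment.GetEnvironmentVariable("HOME") ?? @"C:\home", "data", "settings.json");

        // Temporary: %SystemDrive%\local is per-app scratch space and doesn't survive restarts.
        string systemDrive = Environment.GetEnvironmentVariable("SystemDrive") ?? "C:";
        string tempPath = Path.Combine(
            systemDrive + Path.DirectorySeparatorChar, "local", "Temp", "scratch.txt");

        Directory.CreateDirectory(Path.GetDirectoryName(persistentPath)!);
        Directory.CreateDirectory(Path.GetDirectoryName(tempPath)!);

        File.WriteAllText(persistentPath, "{ \"keep\": true }");    // survives app restarts
        File.WriteAllText(tempPath, "transient work in progress");  // wiped on restart or VM move
    }
}
```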
-| SKU | Local file storage |
+App Service limits the amount of storage in `%SystemDrive%\local` for each app to prevent individual apps from consuming excessive amounts of local file storage. For Free, Shared, and Consumption (Azure Functions) tiers, the limit is 500 MB. The following table lists other tiers:
+
+| Tier | Local file storage |
| - | - |
-| B1/S1/P1 | 11GB |
-| B2/S2/P2 | 15GB |
-| B3/S3/P3 | 58GB |
-| P0v3 | 11GB |
-| P1v2/P1v3/P1mv3/Isolated1/Isolated1v2 | 21GB |
-| P2v2/P2v3/P2mv3/Isolated2/Isolated2v2 | 61GB |
-| P3v2/P3v3/P3mv3/Isolated3/Isolated3v2 | 140GB |
-| Isolated4v2 | 276GB|
-| P4mv3 | 280GB |
-| Isolated5v2 | 552GB|
-| P5mv3 | 560GB |
-| Isolated6v2 | 1104GB|
-
-Two examples of how App Service uses temporary local storage are the directory for temporary ASP.NET files and the directory for IIS compressed files. The ASP.NET compilation system uses the `%SystemDrive%\local\Temporary ASP.NET Files` directory as a temporary compilation cache location. IIS uses the `%SystemDrive%\local\IIS Temporary Compressed Files` directory to store compressed response output. Both of these types of file usage (as well as others) are remapped in App Service to per-app temporary local storage. This remapping ensures that functionality continues as expected.
-
-Each app in App Service runs as a random unique low-privileged worker process identity called the "application pool identity", described further in the IIS [Application Pool Identities](/iis/manage/configuring-security/application-pool-identities) documentation. Application code uses this identity for basic read-only access to the operating system drive. This means application code can list common directory structures and read common files on operating system drive. Although this might appear to be a somewhat broad level of access, the same directories and files are accessible when you provision a worker role in an Azure hosted service and read the drive contents.
+| B1/S1/P1 | 11 GB |
+| B2/S2/P2 | 15 GB |
+| B3/S3/P3 | 58 GB |
+| P0v3 | 11 GB |
+| P1v2/P1v3/P1mv3/Isolated1/Isolated1v2 | 21 GB |
+| P2v2/P2v3/P2mv3/Isolated2/Isolated2v2 | 61 GB |
+| P3v2/P3v3/P3mv3/Isolated3/Isolated3v2 | 140 GB |
+| Isolated4v2 | 276 GB|
+| P4mv3 | 280 GB |
+| Isolated5v2 | 552 GB|
+| P5mv3 | 560 GB |
+| Isolated6v2 | 1,104 GB|
+
+Two examples of how App Service uses temporary local storage are the directory for temporary ASP.NET files and the directory for IIS compressed files. The ASP.NET compilation system uses the `%SystemDrive%\local\Temporary ASP.NET Files` directory as a temporary compilation cache location. IIS uses the `%SystemDrive%\local\IIS Temporary Compressed Files` directory to store compressed response output. Both of these types of file usage (along with others) are remapped in App Service to per-app temporary local storage. This remapping helps ensure that functionality continues as expected.
+
+Each app in App Service runs as a random, unique, low-privileged worker process identity called the [application pool identity](/iis/manage/configuring-security/application-pool-identities). Application code uses this identity for basic read-only access to the operating system drive. This access means that application code can list common directory structures and read common files on the operating system drive. Although this level of access might seem to be broad, the same directories and files are accessible when you provision a worker role in an Azure-hosted service and read the drive contents.
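A quick way to see that boundary from application code: enumerating a common OS directory succeeds, while writing to it is denied. A C# sketch:

```csharp
using System;
using System.IO;
using System.Linq;

static class OsDriveAccess
{
    public static void Demonstrate()
    {
        string systemRoot = Environment.GetEnvironmentVariable("SystemRoot") ?? @"C:\Windows";

        // Read-only access as the application pool identity: listing common directories works.
        foreach (string dir in Directory.EnumerateDirectories(systemRoot).Take(5))
        {
            Console.WriteLine(dir);
        }

        // Writing outside the app's own storage areas on the OS drive is blocked.
        try
        {
            File.WriteAllText(Path.Combine(systemRoot, "not-allowed.txt"), "test");
        }
        catch (UnauthorizedAccessException)
        {
            Console.WriteLine("Write access to the OS drive is denied, as expected.");
        }
    }
}
```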
<a name="multipleinstances"></a> ### File access across multiple instances
-The content share (`%HOME%`) directory contains an app's content, and application code can write to it. If an app runs on multiple instances, the `%HOME%` directory is shared among all instances so that all instances see the same directory. So, for example, if an app saves uploaded files to the `%HOME%` directory, those files are immediately available to all instances.
-The temporary local storage (`%SystemDrive%\local`) directory is not shared between instances, neither is it shared between the app and its [Kudu app](resources-kudu.md).
+The content share (`%HOME%`) directory contains an app's content, and application code can write to it. If an app runs on multiple instances, the `%HOME%` directory is shared among all instances so that all instances see the same directory. For example, if an app saves uploaded files to the `%HOME%` directory, those files are immediately available to all instances.
+
+The temporary local storage (`%SystemDrive%\local`) directory is not shared between instances. It's also not shared between the app and its [Kudu app](resources-kudu.md).
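The following C# sketch makes the difference visible when an app scales out. It assumes the `WEBSITE_INSTANCE_ID` environment variable that App Service sets on each instance; the folder and file names are illustrative:

```csharp
using System;
using System.IO;

static class InstanceMarkers
{
    public static void Write()
    {
        string instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID") ?? "local";

        // Shared content share: every instance sees every marker file written here.
        string sharedDir = Path.Combine(
            Environment.GetEnvironmentVariable("HOME") ?? @"C:\home", "site", "markers");
        Directory.CreateDirectory(sharedDir);
        File.WriteAllText(Path.Combine(sharedDir, $"{instanceId}.txt"), DateTime.UtcNow.ToString("o"));

        // Temporary local storage: visible only on this instance, and not to the Kudu app.
        File.WriteAllText(Path.Combine(Path.GetTempPath(), "local-marker.txt"), instanceId);
    }
}
```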
<a id="NetworkAccess"></a> ## Network access
-Application code can use TCP/IP and UDP-based protocols to make outbound network connections to Internet accessible endpoints that expose external services. Apps can use these same protocols to connect to services within Azure, for example, by establishing HTTPS connections to SQL Database.
-There's also a limited capability for apps to establish one local loopback connection, and have an app listen on that local loopback socket. This feature exists primarily to enable apps that listen on local loopback sockets as part of their functionality. Each app sees a "private" loopback connection. App "A" cannot listen to a local loopback socket established by app "B".
+Application code can use TCP/IP and UDP-based protocols to make outbound network connections to internet-accessible endpoints that expose external services. Apps can use these same protocols to connect to services within Azure, for example, by establishing HTTPS connections to Azure SQL Database.
-Named pipes are also supported as an inter-process communication (IPC) mechanism between different processes that collectively run an app. For example, the IIS FastCGI module relies on named pipes to coordinate the individual processes that run PHP pages.
+There's also a limited capability for apps to establish one local loopback connection and have an app listen on that local loopback socket. This feature enables apps that listen on local loopback sockets as part of their functionality. Each app has a private loopback connection. One app can't listen to a local loopback socket that another app established.
+
+Named pipes are also supported as a mechanism for interprocess communication between processes that collectively run an app. For example, the IIS FastCGI module relies on named pipes to coordinate the individual processes that run PHP pages.
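For example, a process in an app can open a private loopback listener that only other processes of the same app can reach. A minimal C# sketch (the port number is arbitrary):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

static class LoopbackListener
{
    // Listens on the app's private loopback socket. Another process in the same app can connect,
    // but processes that belong to other apps on the same VM can't.
    public static async Task RunAsync()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8085); // arbitrary example port
        listener.Start();

        using TcpClient client = await listener.AcceptTcpClientAsync();
        Console.WriteLine($"Accepted loopback connection from {client.Client.RemoteEndPoint}");

        listener.Stop();
    }
}
```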
<a id="Code"></a> ## Code execution, processes, and memory
-As noted earlier, apps run inside of low-privileged worker processes using a random application pool identity. Application code has access to the memory space associated with the worker process, as well as any child processes that may be spawned by CGI processes or other applications. However, one app cannot access the memory or data of another app even if it's on the same virtual machine.
-Apps can run scripts or pages written with supported web development frameworks. App Service doesn't configure any web framework settings to more restricted modes. For example, ASP.NET apps running on App Service run in "full" trust as opposed to a more restricted trust mode. Web frameworks, including both classic ASP and ASP.NET, can call in-process COM components (but not out of process COM components) like ADO (ActiveX Data Objects) that are registered by default on the Windows operating system.
+As noted earlier, apps run inside low-privileged worker processes by using a random application pool identity. Application code has access to the memory space associated with the worker process, along with any child processes that CGI processes or other applications might spawn. However, one app can't access the memory or data of another app, even if it's on the same virtual machine.
+
+Apps can run scripts or pages written with supported web development frameworks. App Service doesn't configure any web framework settings to more restricted modes. For example, ASP.NET apps running in App Service run in full trust, as opposed to a more restricted trust mode. Web frameworks, including both classic ASP and ASP.NET, can call in-process COM components (like ActiveX Data Objects) that are registered by default on the Windows operating system. Web frameworks can't call out-of-process COM components.
-Apps can spawn and run arbitrary code. It's allowable for an app to do things like spawn a command shell or run a PowerShell script. However, even though arbitrary code and processes can be spawned from an app, executable programs and scripts are still restricted to the privileges granted to the parent application pool. For example, an app can spawn an executable that makes an outbound HTTP call, but that same executable cannot attempt to unbind the IP address of a virtual machine from its NIC. Making an outbound network call is allowed to low-privileged code, but attempting to reconfigure network settings on a virtual machine requires administrative privileges.
+An app can spawn and run arbitrary code, open a command shell, or run a PowerShell script. However, executable programs and scripts are still restricted to the privileges granted to the parent application pool. For example, an app can spawn an executable program that makes an outbound HTTP call, but that executable program can't try to unbind the IP address of a virtual machine from its network adapter. Making an outbound network call is allowed for low-privileged code, but trying to reconfigure network settings on a virtual machine requires administrative privileges.
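For example, an app can launch a PowerShell script as a child process, and that child inherits the same low-privileged identity. A C# sketch with a hypothetical script path:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class ChildProcessRunner
{
    public static void RunScript()
    {
        // Hypothetical script stored in the app's own content share.
        string script = Path.Combine(
            Environment.GetEnvironmentVariable("HOME") ?? @"C:\home", "site", "scripts", "report.ps1");

        var startInfo = new ProcessStartInfo
        {
            FileName = "powershell.exe",
            Arguments = $"-NoProfile -ExecutionPolicy Bypass -File \"{script}\"",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        // The child runs with the parent application pool's privileges: outbound HTTP calls work,
        // but administrative actions such as reconfiguring the VM's network settings are denied.
        using Process process = Process.Start(startInfo)!;
        Console.WriteLine(process.StandardOutput.ReadToEnd());
        process.WaitForExit();
    }
}
```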
<a id="Diagnostics"></a> ## Diagnostics logs and events
-Log information is another set of data that some apps attempt to access. The types of log information available to code running in App Service includes diagnostic and log information generated by an app that is also easily accessible to the app.
-For example, W3C HTTP logs generated by an active app are available either on a log directory in the network share location created for the app, or available in blob storage if a customer has set up W3C logging to storage. The latter option enables large quantities of logs to be gathered without the risk of exceeding the file storage limits associated with a network share.
+Log information is another set of data that some apps try to access. The types of log information available to code running in App Service include diagnostic and log information that an app generates and can easily access.
-In a similar vein, real-time diagnostics information from .NET apps can also be logged using the .NET tracing and diagnostics infrastructure, with options to write the trace information to either the app's network share, or alternatively to a blob storage location.
+For example, app-generated W3C HTTP logs are available either:
-Areas of diagnostics logging and tracing that aren't available to apps are Windows ETW events and common Windows event logs (for example, System, Application, and Security event logs). Since ETW trace information can potentially be viewable machine-wide (with the right ACLs), read and write access to ETW events are blocked. Developers might notice that API calls to read and write ETW events and common Windows event logs appear to work, but that is because App Service is "faking" the calls so that they appear to succeed. In reality, the application code has no access to this event data.
+- In a log directory in the network share location that you created for the app
+- In blob storage if you set up W3C logging to storage
+
+The latter option enables apps to gather large amounts of logs without exceeding the file storage limits associated with a network share.
+
+Similarly, real-time diagnostics information from .NET apps can be logged through the .NET tracing and diagnostics infrastructure. You can then write the trace information to either the app's network share or a blob storage location.
+
+Areas of diagnostics logging and tracing that aren't available to apps are Event Tracing for Windows (ETW) events and common Windows event logs (for example, system, application, and security event logs). Because ETW trace information can potentially be viewable across a machine (with the right access control lists), read access and write access to ETW events are blocked. API calls to read and write ETW events and common Windows event logs might seem to work, but in reality, the application code has no access to this event data.
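For example, here's a sketch of wiring up `System.Diagnostics` tracing so that trace output lands under the app's own log area on the network share; the subfolder name is illustrative:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class AppTracing
{
    public static void Configure()
    {
        // Write trace output under the app's content share. The platform's diagnostics settings
        // can route similar information to blob storage instead.
        string logDir = Path.Combine(
            Environment.GetEnvironmentVariable("HOME") ?? @"C:\home", "LogFiles", "app-traces");
        Directory.CreateDirectory(logDir);

        Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(logDir, "trace.log")));
        Trace.AutoFlush = true;

        Trace.TraceInformation("Application tracing initialized at {0:o}", DateTime.UtcNow);
    }
}
```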
<a id="RegistryAccess"></a> ## Registry access
-Apps have read-only access to much (though not all) of the registry of the virtual machine they're running on. In practice, this means registry keys that allow read-only access to the local Users group are accessible by apps. One area of the registry that is currently not supported for either read or write access is the HKEY\_CURRENT\_USER hive.
-Write-access to the registry is blocked, including access to any per-user registry keys. From the app's perspective, write access to the registry should never be relied upon in the Azure environment since apps can (and do) get migrated across different virtual machines. The only persistent writeable storage that can be depended on by an app is the per-app content directory structure stored on the App Service UNC shares.
+Apps have read-only access to much (though not all) of the registry of the virtual machine that they're running on. This access means that apps can access registry keys that allow read-only access to the Local Users group. One area of the registry that's currently not supported for either read or write access is the `HKEY_CURRENT_USER` hive.
+
+Write access to the registry is blocked, including access to any per-user registry keys. From the app's perspective, it can't rely on write access to the registry in the Azure environment because apps can be migrated across virtual machines. The only persistent writeable storage that an app can depend on is the per-app content directory structure stored on the App Service UNC shares.
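The following C# sketch shows what that looks like in practice: reads of broadly readable keys succeed, and write attempts are rejected. It assumes the `Microsoft.Win32` registry APIs are available to the app (built into .NET Framework; a NuGet package on modern .NET), and the key name in the write attempt is hypothetical:

```csharp
using System;
using Microsoft.Win32;

static class RegistryProbe
{
    public static void Demonstrate()
    {
        // Reading a commonly readable key works with the app's low-privileged identity.
        using (RegistryKey? key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\Windows NT\CurrentVersion"))
        {
            Console.WriteLine(key?.GetValue("ProductName"));
        }

        // Any write attempt is blocked; persist data to the %HOME% content share instead.
        try
        {
            Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ContosoApp"); // hypothetical key name
        }
        catch (Exception ex) when (ex is UnauthorizedAccessException or System.Security.SecurityException)
        {
            Console.WriteLine("Registry write access is blocked, as expected.");
        }
    }
}
```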
## Remote desktop access
App Service doesn't provide remote desktop access to the VM instances.
## More information
-[Azure App Service sandbox](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox) - The most up-to-date information about the execution environment of App Service. This page is
-maintained directly by the App Service development team.
+For the most up-to-date information about the execution environment of App Service, see the [Azure App Service sandbox](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox). The App Service development team maintains this page.
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
Last updated 03/17/2021 + # Kudu service overview
-Kudu is the engine behind a number of features in [Azure App Service](overview.md) related to source control based deployment, and other deployment methods like Dropbox and OneDrive sync.
+Kudu is the engine behind some features in [Azure App Service](overview.md) that are related to source-control-based deployment and other deployment methods, like Dropbox and OneDrive sync.
## Access Kudu for your app
-Anytime you create an app, App Service creates a companion app for it that's secured by HTTPS. This Kudu app is accessible at:
-- App not in Isolated tier: `https://<app-name>.scm.azurewebsites.net`-- Internet-facing app in Isolated tier (App Service Environment): `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`-- Internal app in Isolated tier (ILB App Service Environment): `https://<app-name>.scm.<ase-name>.appserviceenvironment.net`
+Anytime you create an app, App Service creates a companion app for it that's secured by HTTPS. This Kudu app is accessible at these URLs:
+
+- App not in the Isolated tier: `https://<app-name>.scm.azurewebsites.net`
+- Internet-facing app in the Isolated tier (App Service Environment): `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`
+- Internal app in the Isolated tier (App Service Environment for internal load balancing): `https://<app-name>.scm.<ase-name>.appserviceenvironment.net`
-For more information, see [Accessing the kudu service](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service).
+For more information, see [Accessing the Kudu service](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service).
## Kudu features
Kudu gives you helpful information about your App Service app, such as:
- Server variables - HTTP headers
-It also provides other features, such as:
+It also provides features like these:
- Run commands in the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console). - Download IIS diagnostic dumps or Docker logs. - Manage IIS processes and site extensions. - Add deployment webhooks for Windows apps. - Allow ZIP deployment UI with `/ZipDeploy`.-- Generates [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).-- Allows access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
+- Generate [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
+- Allow access with a [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
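As an example of the REST API, the following C# sketch pushes a ZIP package to the Kudu `/api/zipdeploy` endpoint with basic authentication. The app name, deployment credentials, and package path are placeholders; see the Kudu wiki links above for the authoritative API details:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static class KuduZipDeploy
{
    public static async Task DeployAsync()
    {
        string appName = "my-web-app";                                            // placeholder
        string user = "$my-web-app";                                              // placeholder deployment user
        string password = Environment.GetEnvironmentVariable("KUDU_DEPLOY_PASSWORD") ?? "";
        string zipPath = @"C:\build\package.zip";                                 // placeholder

        using var client = new HttpClient();
        string token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        using var content = new StreamContent(File.OpenRead(zipPath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");

        HttpResponseMessage response = await client.PostAsync(
            $"https://{appName}.scm.azurewebsites.net/api/zipdeploy", content);

        Console.WriteLine($"Deployment returned {(int)response.StatusCode}");
    }
}
```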
## RBAC permissions required to access Kudu
-To access Kudu in the browser with Microsoft Entra authentication, you need to be a member of a built-in or custom role.
-- If using a built-in role, you must be a member of Website Contributor, Contributor, or Owner.-- If using a custom role, you need the resource provider operation: `Microsoft.Web/sites/publish/Action`.
+To access Kudu in the browser by using Microsoft Entra authentication, you need to be a member of a built-in or custom role.
+
+If you're using a built-in role, you must be a member of Website Contributor, Contributor, or Owner. If you're using a custom role, you need the resource provider operation: `Microsoft.Web/sites/publish/Action`.
-## More Resources
+## More resources
-Kudu is an [open source project](https://github.com/projectkudu/kudu), and has its documentation at [Kudu Wiki](https://github.com/projectkudu/kudu/wiki).
+Kudu is an [open-source project](https://github.com/projectkudu/kudu). It has documentation on the [Kudu wiki](https://github.com/projectkudu/kudu/wiki).
app-service Routine Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/routine-maintenance.md
Title: App Service routine maintenance
-description: Learn more about the routine, planned maintenance to keep the App Service platform up-to-date and secure.
+ Title: Routine maintenance for Azure App Service
+description: Learn more about routine, planned maintenance to help keep the App Service platform up to date and secure.
tags: app-service
Last updated 02/08/2023
-# Routine (planned) maintenance for App Service
-Routine maintenance covers behind the scenes updates to the Azure App Service platform. Types of maintenance can be performance improvements, bug fixes,
-new features, or security updates. App Service maintenance can be on App Service itself or the underlying operating system.
+# Routine (planned) maintenance for Azure App Service
->[!IMPORTANT]
->A breaking change or deprecation of functionality is not a part of routine maintenance (see [Modern Lifecycle Policy - Microsoft Lifecycle | Microsoft Learn](/lifecycle/policies/modern) for deprecation topic for details).
->
+Routine maintenance covers behind-the-scenes updates to Azure App Service. Types of maintenance can be performance improvements, bug fixes, new features, or security updates. App Service maintenance can be on the service itself or the underlying operating system.
-Our service quality and uptime guarantees continue to apply during maintenance periods. Maintenance periods are mentioned to help customers to get visibility into platform changes.
+> [!IMPORTANT]
+> A breaking change or deprecation of functionality is not a part of routine maintenance. For more information, see [Modern Lifecycle Policy](/lifecycle/policies/modern).
+
+Microsoft service quality and uptime guarantees continue to apply during maintenance periods. Notifications mention maintenance periods to help customers get visibility into platform changes.
## What to expect
-Like security updates on personal computers, mobile phones and other devices, even machines in the cloud need the latest updates. Unlike physical devices, cloud solutions like Azure App Service provide ways to overcome these routines with more ease. There's no need to "stop working" for a certain period and wait until patches are installed. Any workload can be shifted to different hardware in a matter of seconds and while updates are installed. The updates are made monthly, but can vary on the needs and other factors.
+Like personal computers, mobile phones, and other devices, machines in the cloud need the latest updates. Unlike physical devices, cloud solutions like Azure App Service provide ways to handle routine maintenance with more ease. There's no need to stop working and wait until patches are installed. Any workload can be shifted to different hardware in a matter of seconds and while updates are installed. The updates happen monthly but can vary, depending on your organization's needs and other factors.
+
+Because a typical cloud solution consists of multiple applications, databases, storage accounts, functions, and other resources, parts of your solutions can undergo maintenance at different times. Some of this coordination is related to geography, region, datacenters, and availability zones. It can also be due to the cloud, where not everything is touched simultaneously. For more information, see [Safe deployment practices](/devops/operate/safe-deployment-practices).
-Since a typical cloud solution consists of multiple applications, databases, storage accounts, functions, and other resources, various parts of your solutions can be undergoing maintenance at different times. Some of this coordination is related to geography, region, data centers, and availability zones. It can also be due to the cloud where not everything is touched simultaneously.
+The following screenshot shows an example of a maintenance event.
-[Safe deployment practices - Azure DevOps | Microsoft Learn](/devops/operate/safe-deployment-practices)
+In order from top to bottom, the example shows:
-In order from top to bottom we see:
-- A descriptive title of the maintenance event-- Impacted regions and subscriptions-- Expected maintenance window
+- A descriptive title of the maintenance event.
+- Affected regions and subscriptions.
+- The expected maintenance window.
-## Frequently Asked Questions
+## Frequently asked questions
### Why is the maintenance taking so long?
-The maintenance fundamentally represents delivering latest updates to the platform and service. It's difficult to predict when individual apps would be affected down to a specific time, so more generic notifications are sent out. The time ranges in those notifications don't reflect the experiences at the app level, but the overall operation across all resources. Apps which undergo maintenance instantly restart on freshly updated machines and continue working. There's no downtime when requests/traffic aren't served.
+Fundamentally, routine maintenance delivers the latest updates to the platform and service. It's hard to predict how the maintenance will affect individual apps down to a specific time, so notifications tend to be more general. The time ranges in notifications don't reflect the experiences at the app level, but rather the overall operation across all resources. Apps that undergo maintenance instantly restart on freshly updated machines and continue working. There's no downtime in which requests and traffic go unserved.
### Why am I getting so many notifications?
-A typical scenario is that customers have multiple applications, and they are upgraded at different times. To avoid sending notifications for each of them, a more generic notification is sent that captures multiple resources. The notification is sent at the beginning and throughout the maintenance window. Due to the time window being longer, you can receive multiple reminders for the same rollout so you can easier correlate any restart/interruption/issue in case it is needed.
+A typical scenario is that customers have multiple applications that are upgraded at different times. To avoid sending notifications for each of them, we send one notification that captures multiple resources. We send the notification at the beginning and throughout the maintenance window. You might receive multiple reminders for the same rollout if the time window is long, so you can more easily correlate any restarts, interruptions, or other issues.
### How is routine maintenance related to SLA?
-Platform maintenance isn't expected to impact application uptime or availability. Applications continue to stay online while platform maintenance occurs. Platform maintenance may cause applications to be cold started on new virtual machines, which can lead to cold start delays. An application is still considered to be online, even while cold-starting. For best practices to minimize/avoid cold starts, consider using [local cache for Windows apps](overview-local-cache.md) as well as [Health check](monitor-instances-health-check.md). It's not expected that sites would incur any SLA violation during maintenance windows.
+Platform maintenance shouldn't affect application uptime or availability. Applications continue to stay online while platform maintenance occurs.
+
+Platform maintenance might cause applications to be cold started on new virtual machines, which can lead to delays. An application is still considered to be online while it's cold starting. To minimize or avoid cold starts, consider using [local cache for Windows apps](overview-local-cache.md) and [health check](monitor-instances-health-check.md).
+
+We don't expect sites to incur any service-level agreement (SLA) violations during the maintenance windows.
-### How does the upgrade work how does it ensure the smooth operation of my apps?
+### How does the upgrade ensure the smooth operation of my apps?
-Azure App Service represents a fleet of scale units, which provide hosting of web applications/solutions to the customers. Each scale unit is further divided into smaller pieces and sliced into a concept of upgrade domains and availability zones. This is to optimize placements of bigger App Service Plans and smooth deployments since not all machines in each scale unit are updated at once. Fleet upgrades machines iteratively while monitoring the health of the fleet so any time there is an issue, the system can stop the rollout. This process is described in detail at [Demystifying the magic behind App Service OS updates - Azure App Service](https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html).
+Azure App Service represents a fleet of scale units that provide hosting of web applications and solutions to customers. Each scale unit is divided into upgrade domains and availability zones. This division optimizes placement of larger App Service plans and smooths deployments, because not all machines in each scale unit are updated at once.
+
+Maintenance operations upgrade machines iteratively while App Service monitors the health of the fleet. If there's a problem, the system can stop the rollout. For more information about this process, see the blog post [Demystifying the magic behind App Service OS updates](https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html).
### Are business hours reflected?
-Maintenance operations are optimized to start outside standard business hours (9-5pm) as statistically that is a better timing for any interruptions and restarts of workloads as there is a less stress on the system (in customer applications and transitively also on the platform itself). For App Service Plan and App Service Environment v2, maintenance can continue into business hours during longer maintenance events.
+Maintenance operations are optimized to start outside the standard business hours of 9 AM to 5 PM. Statistically, that's the best time for any interruptions and restarts of workloads because there's less stress on the system (in customer applications and transitively on the platform itself). For App Service plans and App Service Environment v2, maintenance can continue into business hours during longer maintenance events.
### What are my options to control routine maintenance?
-If you run your workloads in Isolated SKU via App Service Environment v3, you can also schedule the upgrades when needed. This is described with details at Control and automate planned maintenance for App Service Environment v3 - Azure App Service.
+If you run your workloads in the Isolated tier via App Service Environment v3, you can schedule the upgrades if necessary. For more information about this capability, see the blog post [Control and automate planned maintenance for App Service Environment v3](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html).
### Can I prepare my apps better for restarts?
-If your applications need extra time during restarts to come online (a typical pattern would be heavy dependency on external resources during application warm-up/start-up), consider using [Health Check](monitor-instances-health-check.md). You can use this to communicate with the platform that your application is not ready to receive requests yet and the system can use that information to route requests to other instances in your App Service Plan. For such case, it's recommended to have at least two instances in the plan.
-
-### My applications have been online, but since these notifications started showing up things are worse. What changed?
+If your applications need extra time during restarts to come online, consider using [health check](monitor-instances-health-check.md). A typical pattern for needing extra time is heavy dependency on external resources during application warmup or startup.
-Updates and maintenance events have been happening to the platform since its inception. The frequency of updates decreased over time, so the number of interruptions also decreased and uptime increases. However, there is an increased level of visibility into all changes which can cause the perception that more changes are being made.
+You can use health check to inform the platform that your application isn't ready to receive requests yet. The system can use that information to route requests to other instances in your App Service plan. For such cases, we recommend that you have at least two instances in the plan.
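Here's a minimal ASP.NET Core sketch of the kind of endpoint that the health check feature can probe. The `/healthz` path and the warmup flag are illustrative assumptions, and you still configure the probe path on the app separately:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// Illustrative readiness check: report unhealthy until warmup (for example, loading external
// dependencies) finishes, so the platform can route requests to other instances in the plan.
builder.Services.AddHealthChecks()
    .AddCheck("warmup", () => AppState.WarmupComplete
        ? HealthCheckResult.Healthy("Warmup finished")
        : HealthCheckResult.Unhealthy("Still warming up"));

var app = builder.Build();

app.MapHealthChecks("/healthz"); // point the App Service health check setting at this same path
app.MapGet("/", () => "Hello from App Service");

app.Run();

// Hypothetical flag that your startup or warmup code sets when the app is ready.
static class AppState
{
    public static volatile bool WarmupComplete;
}
```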
-## Next steps
+### My applications have been online, but things are worse since these notifications started showing up. What changed?
-[Control and automate planned maintenance for App Service Environment v3 - Azure App Service](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html)
+Updates and maintenance events have been happening to the platform since its inception. The frequency of updates decreased over time, so the number of interruptions also decreased and uptime increased. However, you now have more visibility into all changes. Increased visibility might cause the perception that more changes are happening.
-[Demystifying the magic behind App Service OS updates - Azure App Service](https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html)
+## Next steps
-[Routine Planned Maintenance Notifications for Azure App Service - Azure App Service](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html)
+Get more information about maintenance notifications by reading the blog post [Routine Planned Maintenance Notifications for Azure App Service](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html).
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
ms.devlang: java Last updated 11/30/2023-+ # Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
Last updated 08/22/2023 -+ # Troubleshoot backend health issues in Application Gateway
automation Automation Runbook Output And Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-output-and-messages.md
Last updated 08/28/2023 -+ # Configure runbook output and message streams
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
Last updated 11/21/2022 -+ #Customer intent: As a developer, I want use workflow runbooks so that I can automate the parallel starting of VMs.
azure-arc Extensions Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-troubleshooting.md
Title: "Troubleshoot extension issues for Azure Arc-enabled Kubernetes clusters" Last updated 12/19/2023 + description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes cluster extensions."
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
For the details on installation, refer to the [GitOps Connector](https://github.
| Variable | Value |
| -- | -- |
-| AZ_ACR_NAME | (your Azure Container Registry instance, for example. azurearctest.azurecr.io) |
| AZURE_SUBSCRIPTION | (your Azure Service Connection, which should be **arc-demo-acr** from earlier in the tutorial) |
| AZ_ACR_NAME | Azure ACR name, for example arc-demo-acr |
| ENVIRONMENT_NAME | Dev |
azure-arc Administer Arc Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/administer-arc-scvmm.md
Last updated 12/04/2023 +
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
Last updated 11/30/2023 --+ # Customer intent: As an infrastructure admin, I want to cleanly remove my VMware vCenter environment from Azure Arc-enabled VMware vSphere.
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md
public static async Task<object> RunOrchestrator(
[FunctionName("CourseRecommendations")] public static async Task<object> Mapper(
- [ActivityTrigger] (string Major, int UniversityYear) inputs, FunctionContext executionContext)
+ [ActivityTrigger] (string Major, int UniversityYear) studentInfo, FunctionContext executionContext)
{ // retrieve and return course recommendations by major and university year return new
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The examples refer to a `ToDoItem` class and a corresponding database table:
<a id="http-trigger-look-up-id-from-query-string-c-oop"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
> [!NOTE] > The HTTP query string parameter is case-sensitive.
namespace AzureSQLSamples
<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a> ### HTTP trigger, get multiple rows from route parameter
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
```cs using System.Collections.Generic;
The examples refer to a `ToDoItem` class and a corresponding database table:
<a id="http-trigger-look-up-id-from-query-string-c"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
> [!NOTE] > The HTTP query string parameter is case-sensitive.
namespace AzureSQLSamples
<a id="http-trigger-get-multiple-items-from-route-data-c"></a> ### HTTP trigger, get multiple rows from route parameter
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
```cs using System.Collections.Generic;
public class ToDoItem {
<a id="http-trigger-get-multiple-items-java"></a> ### HTTP trigger, get multiple rows
-The following example shows a SQL input binding in a Java function that reads from a query and returns the results in the HTTP response.
+The following example shows a SQL input binding in a Java function that's [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query, and returns the results in the HTTP response.
```java package com.function;
public class GetToDoItems {
<a id="http-trigger-look-up-id-from-query-string-java"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a SQL input binding in a Java function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+The following example shows a SQL input binding in a Java function that's [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
```java public class GetToDoItem {
public class GetToDoItem {
<a id="http-trigger-delete-one-or-multiple-rows-java"></a> ### HTTP trigger, delete rows
-The following example shows a SQL input binding in a Java function that executes a stored procedure with input from the HTTP request query parameter.
+The following example shows a SQL input binding in a Java function that's [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) and executes a stored procedure with input from the HTTP request query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
The examples refer to a database table:
<a id="http-trigger-get-multiple-items-javascript"></a> ### HTTP trigger, get multiple rows
-The following example shows a SQL input binding that reads from a query and returns the results in the HTTP response.
+The following example shows a SQL input binding that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query, and returns the results in the HTTP response.
::: zone-end ::: zone pivot="programming-language-typescript"
module.exports = async function (context, req, todoItems) {
<a id="http-trigger-look-up-id-from-query-string-javascript"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a SQL input binding that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+The following example shows a SQL input binding that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
::: zone-end ::: zone pivot="programming-language-typescript"
module.exports = async function (context, req, todoItem) {
<a id="http-trigger-delete-one-or-multiple-rows-javascript"></a> ### HTTP trigger, delete rows
-The following example shows a SQL input binding that executes a stored procedure with input from the HTTP request query parameter.
+The following example shows a SQL input binding that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) and executes a stored procedure with input from the request's query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
The examples refer to a database table:
<a id="http-trigger-get-multiple-items-powershell"></a> ### HTTP trigger, get multiple rows
-The following example shows a SQL input binding in a function.json file and a PowerShell function that reads from a query and returns the results in the HTTP response.
+The following example shows a SQL input binding in a function.json file and a PowerShell function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query, and returns the results in the HTTP response.
The following is binding data in the function.json file:
Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
<a id="http-trigger-look-up-id-from-query-string-powershell"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a SQL input binding in a PowerShell function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+The following example shows a SQL input binding in a PowerShell function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
The following is binding data in the function.json file:
Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
<a id="http-trigger-delete-one-or-multiple-rows-powershell"></a> ### HTTP trigger, delete rows
-The following example shows a SQL input binding in a function.json file and a PowerShell function that executes a stored procedure with input from the HTTP request query parameter.
+The following example shows a SQL input binding in a function.json file and a PowerShell function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) and executes a stored procedure with input from the request's query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
The examples refer to a database table:
<a id="http-trigger-get-multiple-items-python"></a> ### HTTP trigger, get multiple rows
-The following example shows a SQL input binding in a function.json file and a Python function that reads from a query and returns the results in the HTTP response.
+The following example shows a SQL input binding in a function.json file and a Python function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query, and returns the results in the HTTP response.
The following is binding data in the function.json file:
def main(req: func.HttpRequest, todoItems: func.SqlRowList) -> func.HttpResponse
<a id="http-trigger-look-up-id-from-query-string-python"></a> ### HTTP trigger, get row by ID from query string
-The following example shows a SQL input binding in a Python function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+The following example shows a SQL input binding in a Python function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md), reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
The following is binding data in the function.json file:
def main(req: func.HttpRequest, todoItem: func.SqlRowList) -> func.HttpResponse:
<a id="http-trigger-delete-one-or-multiple-rows-python"></a> ### HTTP trigger, delete rows
-The following example shows a SQL input binding in a function.json file and a Python function that executes a stored procedure with input from the HTTP request query parameter.
+The following example shows a SQL input binding in a function.json file and a Python function that is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) and executes a stored procedure with input from the request's query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
If an exception occurs when a SQL input binding is executed then the function co
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md) - [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
+- [Run a function from an HTTP request (Trigger)](./functions-bindings-http-webhook-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
To use an identity-based connection for `AzureWebJobsStorage`, configure the fol
[Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set.
-If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service are inferred for this account. This doesn't work when the storage account is in a sovereign cloud or has a custom DNS.
+If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.[blob|queue|file|table].core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service are inferred for this account. This doesn't work when the storage account is in a sovereign cloud or has a custom DNS.
| Setting | Description | Example value | |--|--|--|
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
The Azure Functions hosting plan you choose dictates the following behaviors:
* The resources available to each function app instance. * Support for advanced functionality, such as Azure Virtual Network connectivity.
-In addition to Azure Functions hosting, you can also host containerized function apps in containers can also be deployed to Kubernetes clusters and to Azure Container Apps. If you choose to host your functions in a Kubernetes cluster, consider using an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). To learn more about deploying custom container apps, see [Azure Container Apps hosting of Azure Functions](./functions-container-apps-hosting.md).
+In addition to Azure Functions hosting, you can also deploy containerized function apps to Kubernetes clusters or to Azure Container Apps. If you choose to host your functions in a Kubernetes cluster, consider using an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). To learn more about deploying custom container apps, see [Azure Container Apps hosting of Azure Functions](./functions-container-apps-hosting.md).
This article provides a detailed comparison between the various hosting plans, including container-based hosting options.
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
$AppInfo = @{}
foreach ($App in $FunctionApps) {
- if ($App.ApplicationSettings["FUNCTIONS_WORKER_RUNTIME"] -eq 'dotnet')
+ if ($App.Runtime -eq 'dotnet')
{
- $AppInfo.Add($App.Name, $App.ApplicationSettings["FUNCTIONS_WORKER_RUNTIME"])
+ $AppInfo.Add($App.Name, $App.Runtime)
} }
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md
Title: Stream execution logs in Azure Functions
description: Learn how you can stream logs for functions in near real time. Last updated 8/21/2023 -+ ms.devlang: azurecli # Customer intent: As a developer, I want to be able to configure streaming logs so that I can see what's happening in my functions in near real time.
azure-functions Update Language Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/update-language-versions.md
Title: Update language versions in Azure Functions description: Learn how to update the version of the native language used by a function app in Azure Functions. -+ Last updated 12/06/2023 zone_pivot_groups: programming-languages-set-functions
azure-linux Quickstart Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-powershell.md
description: Learn how to quickly create an Azure Linux Container Host for an AK
+ Last updated 11/20/2023
azure-monitor Alerts Create Rule Cli Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md
description: This article shows you how to create a new alert rule using the CLI
+ Last updated 01/03/2024
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
- Title: Configure Container insights agent data collection | Microsoft Docs
-description: This article describes how you can configure the Container insights agent to control stdout/stderr and environment variables log collection.
- Previously updated : 11/14/2023---
-# Configure agent data collection for Container insights
-
-Container insights collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters from the containerized agent. You can configure agent data collection settings by creating a custom Kubernetes ConfigMap to control this experience.
-
-This article demonstrates how to create ConfigMaps and configure data collection based on your requirements.
-
-## ConfigMap file settings overview
-
-A template ConfigMap file is provided so that you can easily edit it with your customizations without having to create it from scratch. Before you start, review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/). Familiarize yourself with how to create, configure, and deploy ConfigMaps. You need to know how to filter stderr and stdout per namespace or across the entire cluster. You also need to know how to filter environment variables for any container running across all pods/nodes in the cluster.
-
->[!IMPORTANT]
->The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is **ciprod06142019** or later. To verify your agent version, on the **Node** tab, select a node. On the **Properties** pane, note the value of the **Agent Image Tag** property. For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
-
-### Data collection settings
-
-The following table describes the settings you can configure to control data collection.
-
->[!NOTE]
->For clusters enabling container insights using Azure CLI version 2.54.0 or greater, the default setting for `[log_collection_settings.schema]` will be set to "v2"
-
-| Key | Data type | Value | Description |
-|--|--|--|--|
-| `schema-version` | String (case sensitive) | v1 | This schema version is used by the agent<br> when parsing this ConfigMap.<br> Currently supported schema-version is v1.<br> Modifying this value isn't supported and will be<br> rejected when the ConfigMap is evaluated. |
-| `config-version` | String | | Supports the ability to keep track of this config file's version in your source control system/repository.<br> Maximum allowed characters are 10, and all other characters are truncated. |
-| `[log_collection_settings.stdout] enabled =` | Boolean | True or false | Controls if stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stdout.exclude_namespaces` setting), stdout logs will be collected from all containers across all pods/nodes in the cluster. If not specified in the ConfigMap,<br> the default value is `enabled = true`. |
-| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs won't be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
-| `[log_collection_settings.stderr] enabled =` | Boolean | True or false | Controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in the ConfigMap, the default value is<br> `enabled = true`. |
-| `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs won't be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | True or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in the ConfigMap.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to `False` either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the `env:` section.<br> If collection of environment variables is globally disabled, you can't enable collection for a specific container. The only override that can be applied at the container level is to disable collection when it's already enabled globally. |
-| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | True or false | This setting controls container log enrichment to populate the `Name` and `Image` property values<br> for every log record written to the **ContainerLog** table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
-| `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | True or false | This setting allows the collection of Kube events of all types.<br> By default, the Kube events with type **Normal** aren't collected. When this setting is set to `true`, the **Normal** events are no longer filtered, and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
-| `[log_collection_settings.schema] enabled =` | String (case sensitive) | v2 or v1 [(retired)](./container-insights-v2-migration.md) | This setting sets the log ingestion format to ContainerLogV2 |
-| `[log_collection_settings.enable_multiline_logs] enabled =` | Boolean | True or False | This setting controls whether multiline container logs are enabled. They are disabled by default. See [Multi-line logging in Container Insights](./container-insights-logging-v2.md) to learn more. |
-
-### Metric collection settings
-
-The following table describes the settings you can configure to control metric collection.
-
-| Key | Data type | Value | Description |
-|--|--|--|--|
-| `[metric_collection_settings.collect_kube_system_pv_metrics] enabled =` | Boolean | True or false | This setting allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace aren't collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. By default, this setting is set to `false`. |
-
-ConfigMap is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
-
-### Agent settings for outbound proxy with Azure Monitor Private Link Scope (AMPLS)
-
-| Key | Data type | Value | Description |
-|--|--|--|--|
-| `[agent_settings.proxy_config] ignore_proxy_settings =` | Boolean | True or false | Set this value to true to ignore proxy settings. On both AKS & Arc K8s environments, if your cluster is configured with forward proxy, then proxy settings are automatically applied and used for the agent. For certain configurations, such as, with AMPLS + Proxy, you might with for the proxy config to be ignored. . By default, this setting is set to `false`. |
-
-## Configure and deploy ConfigMaps
-
-To configure and deploy your ConfigMap configuration file to your cluster:
-
-1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as *container-azm-ms-agentconfig.yaml*.
-
-1. Edit the ConfigMap YAML file with your customizations to collect stdout, stderr, and environmental variables:
-
- - To exclude specific namespaces for stdout log collection, configure the key/value by using the following example:
- `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
- - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
- - To disable stderr log collection cluster-wide, configure the key/value by using the following example: `[log_collection_settings.stderr] enabled = false`.
-
- Save your changes in the editor.
-
-1. Create a ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-
-The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message similar to this example includes the following result: `configmap "container-azm-ms-agentconfig" created`.
--
-## Verify configuration
-
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
-
-```
-***************Start Config Processing********************
-config::unsupported/missing config schema version - 'v21' , using defaults
-```
-
-Errors related to applying configuration changes are also available for review. The following options are available to perform more troubleshooting of configuration changes:
--- From an agent pod log by using the same `kubectl logs` command.-- From live logs. Live logs show errors similar to the following example:-
- ```
- config::error::Exception while parsing config map for log collection/env variable settings: \nparse error on value \"$\" ($end), using defaults, please check config map for errors
- ```
--- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with error severity for configuration errors. If there are no errors, the entry in the table will have data with severity info, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.-
-After you correct the errors in the ConfigMap, save the YAML file and apply the updated ConfigMap by running the following command: `kubectl apply -f <configmap_yaml_file.yaml`.
-
-## Apply updated ConfigMap
-
-If you've already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then you can apply it by using the same command as before: `kubectl apply -f <configmap_yaml_file.yaml`.
-
-The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message similar to this example includes the following result: `configmap "container-azm-ms-agentconfig" updated`.
-
-## Verify schema version
-
-Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`.
-
-Output similar to the following example appears with the annotation schema-versions:
-
-```
- Name: ama-logs-fdf58
- Namespace: kube-system
- Node: aks-agentpool-95673144-0/10.240.0.4
- Start Time: Mon, 10 Jun 2019 15:01:03 -0700
- Labels: controller-revision-hash=589cc7785d
- dsName=ama-logs-ds
- pod-template-generation=1
- Annotations: agentVersion=1.10.0.1
- dockerProviderVersion=5.0.0-0
- schema-versions=v1
-```
-
-## Frequently asked questions
-
-This section provides answers to common questions.
-
-### How do I enable log collection for containers in the kube-system namespace through Helm?
-
-The log collection from containers in the kube-system namespace is disabled by default. You can enable log collection by setting an environment variable on Azure Monitor Agent. See the [Container insights](https://aka.ms/azuremonitor-containers-helm-chart) GitHub page.
-
--
-## Next steps
--- Container insights doesn't include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.-- With monitoring enabled to collect health and resource utilization of your Azure Kubernetes Service or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.-- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
Use the workbooks, performance charts, and health status in Container insights t
This article helps you understand the two perspectives and how Azure Monitor helps you quickly assess, investigate, and resolve detected issues.
-The main differences in monitoring a Windows Server cluster with Container insights compared to a Linux cluster are described in [Features of Container insights](container-insights-overview.md#features-of-container-insights) in the overview article.
- ## Workbooks
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The Azure Monitor pricing model is primarily based on the amount of data ingeste
The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage: - Perf, Inventory, InsightsMetrics, and KubeEvents can be controlled through [cost optimization settings](../containers/container-insights-cost-config.md)-- Stdout and stderr container logs from every monitored container in every Kubernetes namespace in the cluster via the [agent ConfigMap](../containers/container-insights-agent-config.md)
+- Stdout and stderr container logs from every monitored container in every Kubernetes namespace in the cluster via the [agent ConfigMap](../containers/container-insights-data-collection-configmap.md)
- Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics
Otherwise, the majority of your data belongs to the ContainerLog table. and you
### Reducing your ContainerLog costs
-After you finish your analysis to determine which sources are generating the data that's exceeding your requirements, you can reconfigure data collection. For more information on configuring collection of stdout, stderr, and environmental variables, see [Configure agent data collection settings](container-insights-agent-config.md).
+After you finish your analysis to determine which sources are generating the data that's exceeding your requirements, you can reconfigure data collection. For more information on configuring collection of stdout, stderr, and environmental variables, see [Configure agent data collection settings](container-insights-data-collection-configmap.md).
The following examples show what changes you can apply to your cluster by modifying the ConfigMap file to help control cost.
After you apply one or more of these changes to your ConfigMaps, apply it to you
You can save on data ingestion costs on ContainerLog in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
-You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
+You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logs-schema.md).
### Prometheus metrics scraping > [!NOTE]
-> This section describes [collection of Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md). This information does not apply if you're using [Managed Prometheus to scrape your Prometheus metrics](prometheus-metrics-enable.md).
+> This section describes [collection of Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md). This information does not apply if you're using [Managed Prometheus to scrape your Prometheus metrics](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
If you [collect Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md), make sure that you limit the number of metrics you collect from your cluster:
azure-monitor Container Insights Data Collection Configmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-configmap.md
+
+ Title: Configure Container insights data collection using ConfigMap
+description: Describes how you can configure other data collection for Container insights using ConfigMap.
+ Last updated : 12/19/2023+++
+# Configure data collection in Container insights using ConfigMap
+
+This article describes how to configure data collection in Container insights using ConfigMap. [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) are a Kubernetes mechanism that allows you to store non-confidential data such as configuration files or environment variables.
+
+The ConfigMap is primarily used to configure data collection of the container logs and environment variables of the cluster. You can individually configure the stdout and stderr logs and also enable multiline logging.
+
+Specific configuration you can perform with the ConfigMap includes:
+
+- Enable/disable collection and namespace filtering for stdout and stderr logs
+- Enable/disable collection of environment variables for the cluster
+- Filter for Normal Kube events
+- Select log schema
+- Enable/disable multiline logging
+- Ignore proxy settings
+
+> [!NOTE]
+> See [Configure data collection in Container insights using data collection rule](./container-insights-data-collection-dcr.md) to configure data collection using a DCR, which allows you to configure different settings.
+
+## Prerequisites
+- ConfigMap is a global list, and only one ConfigMap can be applied to the agent for Container insights. Applying another ConfigMap overrides the previous ConfigMap's collection settings.
+- The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is **ciprod06142019** or later. To verify your agent version, on the **Node** tab, select a node. On the **Properties** pane, note the value of the **Agent Image Tag** property. For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
+
+## Configure and deploy ConfigMap
+
+Use the following procedure to configure and deploy your ConfigMap configuration file to your cluster:
+
+1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and open it in an editor. If you already have a ConfigMap file, then you can use that one.
+1. Edit the ConfigMap YAML file with your customizations to collect stdout, stderr, and environmental variables:
+
+ - To exclude specific namespaces for stdout log collection, configure the key/value by using the following example:
+ `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
+ - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
+ - To disable stderr log collection cluster-wide, configure the key/value by using the following example: `[log_collection_settings.stderr] enabled = false`.
+
+1. Save your changes in the editor.
+
+1. Create a ConfigMap by running the following kubectl command:
+
+ ```bash
+ kubectl apply -f <configmap_yaml_file.yaml>
+ ```
+
+ Example:
+
+ ```bash
+ kubectl apply -f container-azm-ms-agentconfig.yaml
+ ```
++
+ The configuration change can take a few minutes to take effect, and then all Azure Monitor Agent pods in the cluster restart. The restart is a rolling restart for all Azure Monitor Agent pods, so they don't all restart at the same time. When the restarts are finished, you'll receive a message similar to the following result:
+
+ ```output
+ configmap "container-azm-ms-agentconfig" created`.
+ ``````
++
+### Data collection settings
+
+The following table describes the settings you can configure to control data collection.
++
+| Setting | Data type | Value | Description |
+|:|:|:|:|
+| `schema-version` | String (case sensitive) | v1 | Used by the agent when parsing this ConfigMap. Currently supported schema-version is v1. Modifying this value isn't supported and will be rejected when the ConfigMap is evaluated. |
+| `config-version` | String | | Allows you to keep track of this config file's version in your source control system/repository. Maximum allowed characters are 10, and all other characters are truncated. |
+| **[log_collection_settings]** | | | |
+| `[stdout] enabled` | Boolean | true<br>false | Controls whether stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection, stdout logs will be collected from all containers across all pods and nodes in the cluster. If not specified in the ConfigMap, the default value is `true`. |
+| `[stdout] exclude_namespaces` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs won't be collected. This setting is effective only if `enabled` is set to `true`. If not specified in the ConfigMap, the default value is<br> `["kube-system","gatekeeper-system"]`. |
+| `[stderr] enabled` | Boolean | true<br>false | Controls whether stderr container log collection is enabled. When set to `true` and no namespaces are excluded for stderr log collection, stderr logs will be collected from all containers across all pods and nodes in the cluster. If not specified in the ConfigMap, the default value is `true`. |
+| `[stderr] exclude_namespaces` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs won't be collected. This setting is effective only if `enabled` is set to `true`. If not specified in the ConfigMap, the default value is<br> `["kube-system","gatekeeper-system"]`. |
+| `[env_var] enabled` | Boolean | true<br>false | This setting controls environment variable collection across all pods and nodes in the cluster. If not specified in the ConfigMap, the default value is `true`. If collection of environment variables is globally enabled, you can disable it for a specific container by setting the environment variable `AZMON_COLLECT_ENV` to `False` either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the `env:` section. If collection of environment variables is globally disabled, you can't enable collection for a specific container. The only override that can be applied at the container level is to disable collection when it's already enabled globally. |
+| `[enrich_container_logs] enabled` | Boolean | true<br>false | Controls container log enrichment to populate the `Name` and `Image` property values for every log record written to the **ContainerLogV2** or **ContainerLog** table for all container logs in the cluster. If not specified in the ConfigMap, the default value is `false`. |
+| `[collect_all_kube_events] enabled` | Boolean | true<br>false| Controls whether Kube events of all types are collected. By default, the Kube events with type **Normal** aren't collected. When this setting is `true`, the **Normal** events are no longer filtered, and all events are collected. If not specified in the ConfigMap, the default value is `false`. |
+| `[schema] containerlog_schema_version` | String (case sensitive) | v2<br>v1 | Sets the log ingestion format. If `v2`, the **ContainerLogV2** table is used. If `v1`, the **ContainerLog** table is used (this table has been deprecated). For clusters enabling container insights using Azure CLI version 2.54.0 or greater, the default setting is `v2`. See [Container insights log schema](./container-insights-logs-schema.md) for details. |
+| `[enable_multiline_logs] enabled` | Boolean | true<br>false | Controls whether multiline container logs are enabled. See [Multi-line logging in Container Insights](./container-insights-logs-schema.md#multi-line-logging-in-container-insights) for details. If not specified in the ConfigMap, the default value is `false`. This requires the `schema` setting to be `v2`. |
+| **[metric_collection_settings]** | | | |
+| `[collect_kube_system_pv_metrics] enabled` | Boolean | true<br>false | Allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace aren't collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. If not specified in the ConfigMap, the default value is `false`. |
+| **[agent_settings]** | | | |
+| `[proxy_config] ignore_proxy_settings` | Boolean | true<br>false | When `true`, proxy settings are ignored. For both AKS and Arc-enabled Kubernetes environments, if your cluster is configured with forward proxy, then proxy settings are automatically applied and used for the agent. For certain configurations, such as with AMPLS + Proxy, you might want the proxy configuration to be ignored. If not specified in the ConfigMap, the default value is `false`. |
+++
+## Verify configuration
+
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod.
+
+```bash
+kubectl logs ama-logs-fdf58 -n kube-system
+```
+
+If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
+
+```output
+***************Start Config Processing********************
+config::unsupported/missing config schema version - 'v21' , using defaults
+```
+
+Errors related to applying configuration changes are also available for review. The following options are available to perform more troubleshooting of configuration changes:
+
+- From an agent pod log using the same `kubectl logs` command.
+- From live logs. Live logs show errors similar to the following example:
+
+ ```
+ config::error::Exception while parsing config map for log collection/env variable settings: \nparse error on value \"$\" ($end), using defaults, please check config map for errors
+ ```
+
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with error severity for configuration errors. If there are no errors, the entry in the table will have data with severity info, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
++
+## Verify schema version
+
+Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command.
+
+```bash
+kubectl describe pod ama-logs-fdf58 -n kube-system
+```
+
+Output similar to the following example appears with the annotation schema-versions:
+
+```output
+ Name: ama-logs-fdf58
+ Namespace: kube-system
+ Node: aks-agentpool-95673144-0/10.240.0.4
+ Start Time: Mon, 10 Jun 2019 15:01:03 -0700
+ Labels: controller-revision-hash=589cc7785d
+ dsName=ama-logs-ds
+ pod-template-generation=1
+ Annotations: agentVersion=1.10.0.1
+ dockerProviderVersion=5.0.0-0
+ schema-versions=v1
+```
+
+## Frequently asked questions
+
+### How do I enable log collection for containers in the kube-system namespace through Helm?
+
+The log collection from containers in the kube-system namespace is disabled by default. You can enable log collection by setting an environment variable on Azure Monitor Agent. See the [Container insights GitHub page](https://aka.ms/azuremonitor-containers-helm-chart).
+
+## Next steps
+
+- See [Configure data collection in Container insights using data collection rule](container-insights-data-collection-dcr.md) to configure data collection using DCR instead of ConfigMap.
+
azure-monitor Container Insights Data Collection Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-dcr.md
+
+ Title: Configure Container insights data collection using data collection rule
+description: Describes how you can configure cost optimization and other data collection for Container insights using a data collection rule.
+ Last updated : 12/19/2023+++
+# Configure data collection in Container insights using data collection rule
+
+This article describes how to configure data collection in Container insights using the [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the cluster. A DCR is created when you onboard a cluster to Container insights. This DCR is used by the containerized agent to define data collection for the cluster.
+
+The DCR is primarily used to configure data collection of performance and inventory data and to configure cost optimization.
+
+Specific configuration you can perform with the DCR includes:
+
+- Enable/disable collection and namespace filtering for performance and inventory data
+- Define collection interval for performance and inventory data
+- Enable/disable collection of stdout and stderr logs
+- Enable/disable Syslog collection
+- Select log schema
+
+> [!NOTE]
+> See [Configure data collection in Container insights using ConfigMap](./container-insights-data-collection-configmap.md) to configure data collection using ConfigMap, which allows you to configure different settings.
+
+## Prerequisites
+
+- AKS clusters must use either a system-assigned or user-assigned managed identity. If the cluster uses a service principal, you must [upgrade to managed identity](../../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
+++
+## Configure data collection
+The DCR that gets created when you enable Container insights is named *MSCI-\<cluster-region\>-\<cluster-name\>*. You can view it by selecting the **Data Collection Rules** option in the **Monitor** menu in the Azure portal. Rather than directly modifying the DCR, use one of the methods described below to configure data collection. See [Data collection parameters](#data-collection-parameters) for details about the available settings used by each method.
+
+> [!WARNING]
+> The default Container insights experience depends on all the existing data streams. Removing one or more of the default streams makes the Container insights experience unavailable, and you need to use other tools such as Grafana dashboards and log queries to analyze collected data.
+
+## [Azure portal](#tab/portal)
+You can use the Azure portal to enable cost optimization on your existing cluster after Container insights has been enabled, or you can enable Container insights on the cluster along with cost optimization.
+
+1. Select the cluster in the Azure portal.
+2. Select the **Insights** option in the **Monitoring** section of the menu.
+3. If Container insights has already been enabled on the cluster, select the **Monitoring Settings** button. If not, select **Configure Azure Monitor** and see [Enable monitoring on your Kubernetes cluster with Azure Monitor](container-insights-onboard.md) for details on enabling monitoring.
+
+ :::image type="content" source="media/container-insights-cost-config/monitor-settings-button.png" alt-text="Screenshot of AKS cluster with monitor settings button." lightbox="media/container-insights-cost-config/monitor-settings-button.png" :::
++
+4. For AKS and Arc-enabled Kubernetes, select **Use managed identity** if you haven't yet migrated the cluster to [managed identity authentication](../containers/container-insights-onboard.md#authentication).
+5. Select one of the cost presets described in [Cost presets](#cost-presets).
+
+ :::image type="content" source="media/container-insights-cost-config/cost-settings-onboarding.png" alt-text="Screenshot that shows the onboarding options." lightbox="media/container-insights-cost-config/cost-settings-onboarding.png" :::
+
+1. If you want to customize the settings, click **Edit collection settings**. See [Data collection parameters](#data-collection-parameters) for details on each setting. For **Collected data**, see [Collected data](#collected-data) below.
+
+ :::image type="content" source="media/container-insights-cost-config/advanced-collection-settings.png" alt-text="Screenshot that shows the collection settings options." lightbox="media/container-insights-cost-config/advanced-collection-settings.png" :::
+
+1. Click **Configure** to save the settings.
++
+### Cost presets
+When you use the Azure portal to configure cost optimization, you can select from the following preset configurations. You can select one of these or provide your own customized settings. By default, Container insights uses the *Standard* preset.
+
+| Cost preset | Collection frequency | Namespace filters | Syslog collection |
+| | | | |
+| Standard | 1 m | None | Not enabled |
+| Cost-optimized | 5 m | Excludes kube-system, gatekeeper-system, azure-arc | Not enabled |
+| Syslog | 1 m | None | Enabled by default |
+
+### Collected data
+The **Collected data** option allows you to select the tables that are populated for the cluster. This is the equivalent of the `streams` parameter when performing the configuration with CLI or ARM. If you select any option other than **All (Default)**, the Container insights experience becomes unavailable, and you must use Grafana or other methods to analyze collected data.
++
+| Grouping | Tables | Notes |
+| | | |
+| All (Default) | All standard container insights tables | Required for enabling the default Container insights visualizations |
+| Performance | Perf, InsightsMetrics | |
+| Logs and events | ContainerLog or ContainerLogV2, KubeEvents, KubePodInventory | Recommended if you have enabled managed Prometheus metrics |
+| Workloads, Deployments, and HPAs | InsightsMetrics, KubePodInventory, KubeEvents, ContainerInventory, ContainerNodeInventory, KubeNodeInventory, KubeServices | |
+| Persistent Volumes | InsightsMetrics, KubePVInventory | |
+++
+## [CLI](#tab/cli)
+
+> [!NOTE]
+> Minimum version required for Azure CLI is 2.51.0.
+ - For AKS clusters, [aks-preview](../../aks/cluster-configuration.md#install-the-aks-preview-azure-cli-extension) version 0.5.147 or higher
+ - For Arc enabled Kubernetes and AKS hybrid, [k8s-extension](../../azure-arc/kubernetes/extensions.md#prerequisites) version 1.4.3 or higher
+
+## AKS cluster
+
+When you use CLI to configure monitoring for your AKS cluster, you provide the configuration as a JSON file using the following format. Each of these settings is described in [Data collection parameters](#data-collection-parameters).
+
+```json
+{
+ "interval": "1m",
+ "namespaceFilteringMode": "Include",
+ "namespaces": ["kube-system"],
+ "enableContainerLogV2": true,
+ "streams": ["Microsoft-Perf", "Microsoft-ContainerLogV2"]
+}
+```
+
+### New AKS cluster
+
+Use the following command to create a new AKS cluster with monitoring enabled. This assumes a configuration file named **dataCollectionSettings.json**.
+
+```azcli
+az aks create -g <clusterResourceGroup> -n <clusterName> --enable-managed-identity --node-count 1 --enable-addons monitoring --data-collection-settings dataCollectionSettings.json --generate-ssh-keys
+```
+
+### Existing AKS Cluster
+
+**Cluster without the monitoring addon**
+Use the following command to add monitoring to an existing cluster without Container insights enabled. This assumes a configuration file named **dataCollectionSettings.json**.
+
+```azcli
+az aks enable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> --data-collection-settings dataCollectionSettings.json
+```
+
+**Cluster with an existing monitoring addon**
+Use the following command to add a new configuration to an existing cluster with Container insights enabled. This assumes a configuration file named **dataCollectionSettings.json**.
+
+```azcli
+# get the configured log analytics workspace resource id
+az aks show -g <clusterResourceGroup> -n <clusterName> | grep -i "logAnalyticsWorkspaceResourceID"
+
+# disable monitoring
+az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>
+
+# enable monitoring with data collection settings
+az aks enable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> --workspace-resource-id <logAnalyticsWorkspaceResourceId> --data-collection-settings dataCollectionSettings.json
+```
+
+## Arc-enabled Kubernetes cluster
+Use the following command to add monitoring to an existing Arc-enabled Kubernetes cluster. See [Data collection parameters](#data-collection-parameters) for definitions of the available settings.
+
+```azcli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true dataCollectionSettings='{"interval":"1m","namespaceFilteringMode": "Include", "namespaces": [ "kube-system"],"enableContainerLogV2": true,"streams": ["<streams to be collected>"]}'
+```
+
+>[!NOTE]
+> When deploying on a Windows machine, the dataCollectionSettings field must be escaped. For example, dataCollectionSettings={\"interval\":\"1m\",\"namespaceFilteringMode\": \"Include\", \"namespaces\": [ \"kube-system\"]} instead of dataCollectionSettings='{"interval":"1m","namespaceFilteringMode": "Include", "namespaces": [ "kube-system"]}'
+
+## AKS hybrid cluster
+Use the following command to add monitoring to an existing AKS hybrid cluster. See [Data collection parameters](#data-collection-parameters) for definitions of the available settings.
+
+```azcli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true dataCollectionSettings='{"interval":"1m","namespaceFilteringMode":"Include", "namespaces": ["kube-system"],"enableContainerLogV2": true,"streams": ["<streams to be collected>"]}'
+```
+
+>[!NOTE]
+> When deploying on a Windows machine, the dataCollectionSettings field must be escaped. For example, dataCollectionSettings={\"interval\":\"1m\",\"namespaceFilteringMode\": \"Include\", \"namespaces\": [ \"kube-system\"]} instead of dataCollectionSettings='{"interval":"1m","namespaceFilteringMode": "Include", "namespaces": [ "kube-system"]}'
++++
+## [ARM](#tab/arm)
++
+1. Download the Azure Resource Manager template and parameter files using the following commands. See below for the template and parameter files for each cluster configuration.
+
+ ```bash
+ curl -L <template file> -o existingClusterOnboarding.json
+ curl -L <parameter file> -o existingClusterParam.json
+ ```
+
+ **AKS cluster**
+ - Template: https://aka.ms/aks-enable-monitoring-costopt-onboarding-template-file
+ - Parameter: https://aka.ms/aks-enable-monitoring-costopt-onboarding-template-parameter-file
+
+ **Arc-enabled Kubernetes**
+ - Template: https://aka.ms/arc-k8s-enable-monitoring-costopt-onboarding-template-file
+ - Parameter: https://aka.ms/arc-k8s-enable-monitoring-costopt-onboarding-template-parameter-file
+
+ **AKS hybrid cluster**
+ - Template: https://aka.ms/existingClusterOnboarding.json
+ - Parameter: https://aka.ms/existingClusterParam.json
+
+1. Edit the values in the parameter file. See [Data collection parameters](#data-collection-parameters) for details on each setting. See below for settings unique to each cluster configuration.
+
+ **AKS cluster**<br>
+ - For _aksResourceId_ and _aksResourceLocation_, use the values on the **AKS Overview** page for the AKS cluster.
+
+ **Arc-enabled Kubernetes**
+ - For _clusterResourceId_ and _clusterResourceLocation_, use the values on the **Overview** page for the Arc-enabled Kubernetes cluster.
+
+ **AKS hybrid cluster**
+ - For _clusterResourceId_ and _clusterRegion_, use the values on the **Overview** page for the AKS hybrid cluster.
+
++
+1. Deploy the ARM template with the following commands:
+
+ ```azcli
+ az login
+ az account set --subscription "Cluster Subscription Name"
+ az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+ ```
++++++
+## Data collection parameters
+
+The following table describes the supported data collection settings and the name used for each for different onboarding options.
++
+| Name | Description |
+|:|:|
+| Collection frequency<br>CLI: `interval`<br>ARM: `dataCollectionInterval` | Determines how often the agent collects data. Valid values are 1m - 30m in 1m intervals. The default value is 1m. If the value is outside the allowed range, it defaults to *1m*. |
+| Namespace filtering<br>CLI: `namespaceFilteringMode`<br>ARM: `namespaceFilteringModeForDataCollection` | *Include*: Collects data only from the values in the *namespaces* field.<br>*Exclude*: Collects data from all namespaces except for the values in the *namespaces* field.<br>*Off*: Ignores any *namespaces* selections and collects data from all namespaces. |
+| Namespace filtering<br>CLI: `namespaces`<br>ARM: `namespacesForDataCollection` | Array of comma separated Kubernetes namespaces to collect inventory and perf data based on the _namespaceFilteringMode_.<br>For example, *namespaces = \["kube-system", "default"]* with an _Include_ setting collects only these two namespaces. With an _Exclude_ setting, the agent collects data from all other namespaces except for _kube-system_ and _default_. With an _Off_ setting, the agent collects data from all namespaces including _kube-system_ and _default_. Invalid and unrecognized namespaces are ignored. |
+| Enable ContainerLogV2<br>CLI: `enableContainerLogV2`<br>ARM: `enableContainerLogV2` | Boolean flag to enable the ContainerLogV2 schema. If set to true, stdout and stderr logs are ingested into the [ContainerLogV2](container-insights-logs-schema.md) table. If not, container logs are ingested into the **ContainerLog** table, unless otherwise specified in the ConfigMap. When specifying the individual streams, you must include the corresponding table for ContainerLog or ContainerLogV2. |
+| Collected data<br>CLI: `streams`<br>ARM: `streams` | An array of Container insights table streams. See [Stream values](#stream-values) for the mapping of streams to tables. |
+
+## Applicable tables and metrics
+The settings for **collection frequency** and **namespace filtering** don't apply to all Container insights data. The following tables list the tables in the Log Analytics workspace used by Container insights and the metrics it collects along with the settings that apply to each.
+
+>[!NOTE]
+>This feature configures settings for all container insights tables except for ContainerLog and ContainerLogV2. To configure settings for these tables, update the ConfigMap described in [agent data collection settings](../containers/container-insights-data-collection-configmap.md).
++
+| Table name | Interval? | Namespaces? | Remarks |
+|:|::|::|:|
+| ContainerInventory | Yes | Yes | |
+| ContainerNodeInventory | Yes | No | Data collection setting for namespaces isn't applicable because a Kubernetes node isn't a namespace-scoped resource. |
+| KubeNodeInventory | Yes | No | Data collection setting for namespaces isn't applicable because a Kubernetes node isn't a namespace-scoped resource. |
+| KubePodInventory | Yes | Yes ||
+| KubePVInventory | Yes | Yes | |
+| KubeServices | Yes | Yes | |
+| KubeEvents | No | Yes | Data collection setting for interval isn't applicable for Kubernetes events. |
+| Perf | Yes | Yes | Data collection setting for namespaces isn't applicable for Kubernetes node-related metrics because a node isn't a namespace-scoped object. |
+| InsightsMetrics| Yes | Yes | Data collection settings apply only to metrics collected from the following namespaces: container.azm.ms/kubestate, container.azm.ms/pv, and container.azm.ms/gpu. |
++
+| Metric namespace | Interval? | Namespaces? | Remarks |
+|:|::|::|:|
+| Insights.container/nodes| Yes | No | Node isn't a namespace scoped resource |
+|Insights.container/pods | Yes | Yes| |
+| Insights.container/containers | Yes | Yes | |
+| Insights.container/persistentvolumes | Yes | Yes | |
+++
+## Stream values
+When you specify the tables to collect using CLI or ARM, you specify a stream name that corresponds to a particular table in the Log Analytics workspace. The following table lists the stream name for each table.
+
+> [!NOTE]
+> If you're familiar with the [structure of a data collection rule](../essentials/data-collection-rule-structure.md), the stream names in this table are specified in the [dataFlows](../essentials/data-collection-rule-structure.md#dataflows) section of the DCR.
+
+| Stream | Container insights table |
+| | |
+| Microsoft-ContainerInventory | ContainerInventory |
+| Microsoft-ContainerLog | ContainerLog |
+| Microsoft-ContainerLogV2 | ContainerLogV2 |
+| Microsoft-ContainerNodeInventory | ContainerNodeInventory |
+| Microsoft-InsightsMetrics | InsightsMetrics |
+| Microsoft-KubeEvents | KubeEvents |
+| Microsoft-KubeMonAgentEvents | KubeMonAgentEvents |
+| Microsoft-KubeNodeInventory | KubeNodeInventory |
+| Microsoft-KubePodInventory | KubePodInventory |
+| Microsoft-KubePVInventory | KubePVInventory |
+| Microsoft-KubeServices | KubeServices |
+| Microsoft-Perf | Perf |
++
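+If you want to check which of these streams a cluster's existing DCR currently collects, one option (a sketch, not an official procedure) is to inspect the DCR's `dataFlows` section with the Azure CLI. The DCR name shown here assumes the *MSCI-\<clusterName\>-\<clusterRegion\>* naming convention; your DCR name might differ.
+
+```azurecli
+# Sketch: show only the dataFlows section of the Container insights DCR,
+# which lists the streams (left column above) currently being collected.
+az monitor data-collection rule show \
+  --name MSCI-<clusterName>-<clusterRegion> \
+  --resource-group <ClusterResourceGroupName> \
+  --query dataFlows
+```
+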
+## Impact on visualizations and alerts
+
+If you're currently using any of these tables for other custom alerts or charts, modifying your data collection settings might degrade those experiences. If you're excluding namespaces or reducing data collection frequency, review your existing alerts, dashboards, and workbooks that use this data.
+
+To scan for alerts that reference these tables, run the following Azure Resource Graph query:
+
+```Kusto
+resources
+| where type in~ ('microsoft.insights/scheduledqueryrules') and ['kind'] !in~ ('LogToMetric')
+| extend severity = strcat("Sev", properties["severity"])
+| extend enabled = tobool(properties["enabled"])
+| where enabled in~ ('true')
+| where tolower(properties["targetResourceTypes"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["targetResourceType"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["scopes"]) matches regex 'providers/microsoft.operationalinsights/workspaces($|/.*)?'
+| where properties contains "Perf" or properties contains "InsightsMetrics" or properties contains "ContainerInventory" or properties contains "ContainerNodeInventory" or properties contains "KubeNodeInventory" or properties contains "KubePodInventory" or properties contains "KubePVInventory" or properties contains "KubeServices" or properties contains "KubeEvents"
+| project id,name,type,properties,enabled,severity,subscriptionId
+| order by tolower(name) asc
+```
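+
+One way to run this query without leaving the command line is through the Azure Resource Graph CLI extension. The sketch below assumes the `resource-graph` extension is available and that you've saved the query above to a local file named `alert-scan.kql` (a hypothetical file name).
+
+```azurecli
+# Sketch: run the saved Azure Resource Graph query and print the matching alert rules.
+az extension add --name resource-graph
+az graph query -q "$(cat alert-scan.kql)" --first 1000 --output table
+```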
+++
+## Next steps
+
+- See [Configure data collection in Container insights using ConfigMap](container-insights-data-collection-configmap.md) to configure data collection using ConfigMap instead of the DCR.
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
- Title: Enable the AKS Monitoring Add-on by using Azure Policy
-description: This article describes how to enable the AKS Monitoring Add-on by using a custom Azure policy.
-- Previously updated : 08/29/2022---
-# Enable the AKS Monitoring Add-on by using Azure Policy
-This article describes how to enable the Azure Kubernetes Service (AKS) Monitoring Add-on by using a custom Azure policy.
-
-## Permissions required
-The AKS Monitoring Add-on requires the following roles on the managed identity used by Azure Policy:
--
-The AKS Monitoring Add-on custom policy can be assigned at either the subscription or resource group scope. If the Log Analytics workspace and AKS cluster are in different subscriptions, the managed identity used by the policy assignment must have the required role permissions on both the subscriptions or on the Log Analytics workspace resource. Similarly, if the policy is scoped to the resource group, the managed identity should have the required role permissions on the Log Analytics workspace if the workspace isn't in the selected resource group scope.
-
-## Create and assign a policy definition by using the Azure portal
-
-Use the Azure portal to create and assign a policy definition.
-
-### Create a policy definition
-
-1. Download the Azure custom policy definition to enable the AKS Monitoring Add-on.
-
- ``` sh
- curl -o azurepolicy.json -L https://aka.ms/aks-enable-monitoring-custom-policy
- ```
-
-1. Go to the [Azure Policy Definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/Definitions) page. Create a policy definition with the following details on the **Policy definition** page:
-
- - **Definition location**: Select the Azure subscription where the policy definition should be stored.
- - **Name**: *(Preview)AKS-Monitoring-Addon*
- - **Description**: *Azure custom policy to enable the Monitoring Add-on onto Azure Kubernetes clusters in a specified scope*
- - **Category**: Select **Use existing** and select **Kubernetes** from the dropdown list.
- - **Policy rule**: Remove the existing sample rules and copy the contents of `azurepolicy.json` downloaded in step 1.
-
-### Assign a policy definition to a specified scope
-
-> [!NOTE]
-> A managed identity will be created automatically and assigned specified roles in the policy definition.
-
-1. Select the policy definition **(Preview) AKS Monitoring Addon** that you created.
-1. Select **Assign** and specify a **Scope** of where the policy should be assigned.
-1. Select **Next** and provide the resource ID of the Log Analytics workspace. The resource ID should be in the format `/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>`.
-1. Create a remediation task if you want to apply the policy to existing AKS clusters in the selected scope.
-1. Select **Review + create** to create the policy assignment.
-
-## Create and assign a policy definition by using the Azure CLI
-
-Use the Azure CLI to create and assign a policy definition.
-
-### Create a policy definition
-
-1. Download the Azure custom policy definition rules and parameter files with the following commands:
-
- ``` sh
- curl -o azurepolicy.rules.json -L https://aka.ms/aks-enable-monitoring-custom-policy-rules
- curl -o azurepolicy.parameters.json -L https://aka.ms/aks-enable-monitoring-custom-policy-parameters
- ```
-
-1. Create the policy definition with the following command:
-
- ```azurecli
- az cloud set -n <AzureCloud | AzureChinaCloud | AzureUSGovernment> # set the Azure cloud
- az login # login to cloud environment
- az account set -s <subscriptionId>
- az policy definition create --name "(Preview)AKS-Monitoring-Addon" --display-name "(Preview)AKS-Monitoring-Addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules azurepolicy.rules.json --params azurepolicy.parameters.json
- ```
-
-### Assign a policy definition to a specified scope
-
-Create the policy assignment with the following command:
-
-```azurecli
-az policy assignment create --name aks-monitoring-addon --policy "(Preview)AKS-Monitoring-Addon" --assign-identity --identity-scope /subscriptions/<subscriptionId> --role Contributor --scope /subscriptions/<subscriptionId> --location <location> -p "{ \"workspaceResourceId\": { \"value\": \"/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>\" } }"
-```
-
-## Next steps
-
-- Learn more about [Azure Policy](../../governance/policy/overview.md).
-- Learn how [remediation access control works](../../governance/policy/how-to/remediate-resources.md#how-remediation-access-control-works).
-- Learn more about [Container insights](./container-insights-overview.md).
-- Install the [Azure CLI](/cli/azure/install-azure-cli).
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
- Title: Enable Container insights for Azure Kubernetes Service (AKS) cluster
-description: Learn how to enable Container insights on an Azure Kubernetes Service (AKS) cluster.
- Previously updated : 11/14/2023----
-# Enable Container insights for Azure Kubernetes Service (AKS) cluster
-
-This article describes how to enable Container insights on a managed Kubernetes cluster hosted on an [Azure Kubernetes Service (AKS)](../../aks/index.yml) cluster.
-
-## Prerequisites
-
-- See [Prerequisites](./container-insights-onboard.md) for Container insights.
-- You can attach an AKS cluster to a Log Analytics workspace in a different Azure subscription in the same Microsoft Entra tenant, but you must use the Azure CLI or an Azure Resource Manager template. You can't currently perform this configuration with the Azure portal.
-- If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the *Microsoft.ContainerService* resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
-
-## Enable monitoring
-
-### [Azure portal](#tab/azure-portal)
-
-There are multiple options to enable Prometheus metrics on your cluster from the Azure portal.
--
-### New cluster
-When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Integrations** tab. In the Azure Monitor section, select either **Default configuration** or **Custom configuration** if you want to specify which workspaces to use. You can perform additional configuration once the cluster is created.
--
-### From existing cluster
-
-This option enables Container insights on a cluster and gives you the option of also enabling [Managed Prometheus and Managed Grafana](./prometheus-metrics-enable.md) for the cluster.
-
-> [!NOTE]
-> If you want to enable Managed Prometheus without Container insights, then [enable it from the Azure Monitor workspace](./prometheus-metrics-enable.md).
-
-1. Open the cluster's menu in the Azure portal and select **Insights**.
- 1. If Container insights isn't enabled for the cluster, then you're presented with a screen identifying which of the features have been enabled. Click **Configure monitoring**.
-
- :::image type="content" source="media/aks-onboard/configure-monitoring-screen.png" lightbox="media/aks-onboard/configure-monitoring-screen.png" alt-text="Screenshot that shows the configuration screen for a cluster.":::
-
- 2. If Container insights has already been enabled on the cluster, select the **Monitoring Settings** button to modify the configuration.
-
- :::image type="content" source="media/aks-onboard/monitor-settings-button.png" lightbox="media/aks-onboard/monitor-settings-button.png" alt-text="Screenshot that shows the monitoring settings button for a cluster.":::
-
-2. **Container insights** will be enabled. Select the checkboxes for **Enable Prometheus metrics** and **Enable Grafana** if you also want to enable them for the cluster. If you have an existing Azure Monitor workspace and Grafana workspace, then they're selected for you.
-
- :::image type="content" source="media/prometheus-metrics-enable/configure-container-insights.png" lightbox="media/prometheus-metrics-enable/configure-container-insights.png" alt-text="Screenshot that shows the dialog box to configure Container insights with Prometheus and Grafana.":::
-
-3. Click **Advanced settings** to select alternate workspaces or create new ones. The **Cost presets** setting allows you to modify the default collection details to reduce your monitoring costs. See [Enable cost optimization settings in Container insights](./container-insights-cost-config.md) for details.
-
- :::image type="content" source="media/aks-onboard/advanced-settings.png" lightbox="media/aks-onboard/advanced-settings.png" alt-text="Screenshot that shows the advanced settings dialog box.":::
-
-4. Click **Configure** to save the configuration.
-
-### From Container insights
-From the Container insights menu, you can view all of your clusters, quickly identify which aren't monitored, and launch the same configuration experience as described in [From existing cluster](#from-existing-cluster).
-
-1. Open the **Monitor** menu in the Azure portal and select **Insights**.
-2. The **Unmonitored clusters** tab lists clusters that don't have Container insights enabled. Click **Enable** next to a cluster and follow the guidance in [From existing cluster](#from-existing-cluster).
--
-## [CLI](#tab/azure-cli)
-
-> [!NOTE]
-> Managed identity authentication is the default in CLI version 2.49.0 or higher. If you need to use legacy/non-managed identity authentication, use a CLI version earlier than 2.49.0. For CLI version 2.54.0 or higher, the logging schema is configured to [ContainerLogV2](container-insights-logging-v2.md) via the ConfigMap.
-
-### Use a default Log Analytics workspace
-
-Use the following command to enable monitoring of your AKS cluster by using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
-
-```azurecli
-az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name>
-```
-
-The output will resemble the following example:
-
-```output
-provisioningState : Succeeded
-```
-
-### Specify a Log Analytics workspace
-
-Use the following command to enable monitoring of your AKS cluster on a specific Log Analytics workspace. The resource ID of the workspace will be in the form `"/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"`.
-
-```azurecli
-az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id>
-```
-
-**Example**
-
-```azurecli
-az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
-```
--
-## [Resource Manager template](#tab/arm)
-
->[!NOTE]
->The template must be deployed in the same resource group as the cluster.
-
-1. Download the template and parameter file.
- - Template file: [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file)
- - Parameter file: [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file)
-
-2. Edit the following values in the parameter file:
-
- | Parameter | Description |
- |:|:|
- | `aksResourceId` | Use the values on the **AKS Overview** page for the AKS cluster. |
- | `aksResourceLocation` | Use the values on the **AKS Overview** page for the AKS cluster. |
- | `workspaceResourceId` | Use the resource ID of your Log Analytics workspace. |
- | `resourceTagValues` | Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>* and this resource is created in the AKS cluster's resource group. If this is the first time onboarding, you can set arbitrary tag values. |
-
-3. Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates). A minimal CLI sketch follows these steps.
-
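-As one example of step 3, a minimal Azure CLI deployment might look like the following sketch. The local file names are placeholders for whatever you saved the template and parameter files as in step 1, and the resource group must be the cluster's resource group per the note above.
-
-```azurecli
-# Sketch: deploy the downloaded template and edited parameter file
-# into the AKS cluster's resource group.
-az deployment group create --resource-group <AKSClusterResourceGroupName> \
-  --template-file ./<template-file>.json \
-  --parameters @./<parameter-file>.json
-```
-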
-## [Bicep](#tab/bicep)
-
-### Existing cluster
-
-1. Download Bicep templates and parameter files depending on whether you want to enable Syslog collection.
-
- **Syslog**
- - Template file: [Template with Syslog](https://aka.ms/enable-monitoring-msi-syslog-bicep-template)
- - Parameter file: [Parameter with Syslog](https://aka.ms/enable-monitoring-msi-syslog-bicep-parameters)
-
- **No Syslog**
- - Template file: [Template without Syslog](https://aka.ms/enable-monitoring-msi-bicep-template)
- - Parameter file: [Parameter without Syslog](https://aka.ms/enable-monitoring-msi-bicep-parameters)
-
-2. Edit the following values in the parameter file:
-
- | Parameter | Description |
- |:|:|
- | `aksResourceId` | Use the values on the AKS Overview page for the AKS cluster. |
- | `aksResourceLocation` | Use the values on the AKS Overview page for the AKS cluster. |
- | `workspaceResourceId` | Use the resource ID of your Log Analytics workspace. |
- | `workspaceRegion` | Use the location of your Log Analytics workspace. |
- | `resourceTagValues` | Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>` and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values. |
- | `enabledContainerLogV2` | Set this parameter value to true to use the default recommended ContainerLogV2 schema. |
- | Cost optimization parameters | Refer to [Data collection parameters](container-insights-cost-config.md#data-collection-parameters) |
-
-3. Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
--
-### New cluster
-Replace and use the managed cluster resources in [Deploy an Azure Kubernetes Service (AKS) cluster using Bicep](../../aks/learn/quick-kubernetes-deploy-bicep.md).
---
-## [Terraform](#tab/terraform)
-
-### New AKS cluster
-
-1. Download Terraform template file depending on whether you want to enable Syslog collection.
-
- **Syslog**
- - [https://aka.ms/enable-monitoring-msi-syslog-terraform](https://aka.ms/enable-monitoring-msi-syslog-terraform)
-
- **No Syslog**
- - [https://aka.ms/enable-monitoring-msi-terraform](https://aka.ms/enable-monitoring-msi-terraform)
-
-2. Adjust the `azurerm_kubernetes_cluster` resource in *main.tf* based on what cluster settings you're going to have.
-3. Update parameters in *variables.tf* to replace values in "<>"
-
- | Parameter | Description |
- |:|:|
- | `aks_resource_group_name` | Use the values on the AKS Overview page for the resource group. |
- | `resource_group_location` | Use the values on the AKS Overview page for the resource group. |
- | `cluster_name` | Define the cluster name that you would like to create. |
- | `workspace_resource_id` | Use the resource ID of your Log Analytics workspace. |
- | `workspace_region` | Use the location of your Log Analytics workspace. |
- | `resource_tag_values` | Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>` and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values. |
- | `enabledContainerLogV2` | Set this parameter value to true to use the default recommended ContainerLogV2 schema. |
- | Cost optimization parameters | Refer to [Data collection parameters](container-insights-cost-config.md#data-collection-parameters) |
--
-4. Run `terraform init -upgrade` to initialize the Terraform deployment.
-5. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment.
-6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
--
-### Existing AKS cluster
-1. Import the existing cluster resource first with the command: ` terraform import azurerm_kubernetes_cluster.k8s <aksResourceId>`
-2. Add the oms_agent add-on profile to the existing azurerm_kubernetes_cluster resource.
- ```
- oms_agent {
- log_analytics_workspace_id = var.workspace_resource_id
- msi_auth_for_monitoring_enabled = true
- }
- ```
-3. Copy the DCR and DCRA resources from the Terraform templates
-4. Run `terraform plan -out main.tfplan` and make sure the change is adding the oms_agent property. Note: If the `azurerm_kubernetes_cluster` resource defined is different during terraform plan, the existing cluster will get destroyed and recreated.
-5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
-
-> [!TIP]
-> - Edit the `main.tf` file appropriately before running the terraform template
-> - Data will start flowing after 10 minutes since the cluster needs to be ready first
-> - WorkspaceID needs to match the format `/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/example-resource-group/providers/Microsoft.OperationalInsights/workspaces/workspaceValue`
-> - If resource group already exists, run `terraform import azurerm_resource_group.rg /subscriptions/<Subscription_ID>/resourceGroups/<Resource_Group_Name>` before terraform plan
-
-### [Azure Policy](#tab/policy)
-
-1. Download Azure Policy template and parameter files depending on whether you want to enable Syslog collection.
-
- - Template file: [https://aka.ms/enable-monitoring-msi-azure-policy-template](https://aka.ms/enable-monitoring-msi-azure-policy-template)
- - Parameter file: [https://aka.ms/enable-monitoring-msi-azure-policy-parameters](https://aka.ms/enable-monitoring-msi-azure-policy-parameters)
-
-2. Create the policy definition using the following command:
-
- ```
- az policy definition create --name "AKS-Monitoring-Addon-MSI" --display-name "AKS-Monitoring-Addon-MSI" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules azure-policy.rules.json --params azure-policy.parameters.json
- ```
-
-3. Create the policy assignment using the following CLI command or any [other available method](../../governance/policy/assign-policy-portal.md).
-
- ```
- az policy assignment create --name aks-monitoring-addon --policy "AKS-Monitoring-Addon-MSI" --assign-identity --identity-scope /subscriptions/<subscriptionId> --role Contributor --scope /subscriptions/<subscriptionId> --location <location> -p "{ \"workspaceResourceId\": { \"value\": \"/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>\" } }"
- ```
-
-> [!TIP]
-> - Make sure that when you run the remediation task, the policy assignment has access to the workspace you specified.
-> - Download all files under *AddonPolicyTemplate* folder before running the policy template.
---
-## Verify agent and solution deployment
-You can verify that the agent is deployed properly using the [kubectl command line tool](../../aks/learn/quick-kubernetes-deploy-cli.md#connect-to-the-cluster).
-
-```
-kubectl get ds ama-logs --namespace=kube-system
-```
-
-The output should resemble the following example, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
-```
-
-If there are Windows Server nodes on the cluster, run the following command to verify that the agent is deployed successfully:
-
-```
-kubectl get ds ama-logs-windows --namespace=kube-system
-```
-
-The output should resemble the following example, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-ama-logs-windows 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
-```
-
-To verify deployment of the solution, run the following command:
-
-```
-kubectl get deployment ama-logs-rs -n=kube-system
-```
-
-The output should resemble the following example, which indicates that it was deployed properly:
-
-```output
-User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-ama-logs-rs 1 1 1 1 3h
-```
-
-## View configuration with CLI
-
-Use the `az aks show` command to find out whether the solution is enabled, what the Log Analytics workspace resource ID is, and summary information about the cluster.
-
-```azurecli
-az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
-```
-
-The command will return JSON-formatted information about the solution. The `addonProfiles` section should include information on the `omsagent` as in the following example:
-
-```output
-"addonProfiles": {
- "omsagent": {
- "config": {
- "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>"
- },
- "enabled": true
- }
- }
-```
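-
-If you only want the monitoring add-on portion of that output, a JMESPath query can narrow it down. This is just a sketch; the property casing follows the example output above.
-
-```azurecli
-# Sketch: return only the omsagent add-on profile from the cluster properties.
-az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> --query addonProfiles.omsagent -o json
-```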
---
-## Limitations
-
-- Dependency on DCR/DCRA for region availability. For a new AKS region, the DCR might not yet be supported in that region, in which case onboarding Container insights with MSI fails. One workaround is to onboard Container insights through the CLI the old way (by using the Container insights solution).
-- You must be on a machine on the same private network to access live logs from a private cluster.
-
-## Next steps
-
-* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
- Title: Monitor Azure Arc-enabled Kubernetes clusters Previously updated : 08/02/2023--
-description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
---
-# Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters
-
-[Azure Monitor Container Insights](container-insights-overview.md) provides rich monitoring experience for Azure Arc-enabled Kubernetes clusters.
--
-## Supported configurations
-
-- Azure Monitor Container Insights supports monitoring Azure Arc-enabled Kubernetes as described in the [Overview](container-insights-overview.md) article, except for the live data feature. Also, users aren't required to have [Owner](../../role-based-access-control/built-in-roles.md#owner) permissions to [enable metrics](container-insights-update-metrics.md).
-- `Docker`, `Moby`, and CRI-compatible container runtimes such as `CRI-O` and `containerd`.
-- Outbound proxy without authentication and outbound proxy with basic authentication are supported. An outbound proxy that expects trusted certificates is currently not supported.
-
->[!NOTE]
->If you are migrating from Container Insights on Azure Red Hat OpenShift v4.x, please also ensure that you have [disabled monitoring](./container-insights-optout-hybrid.md) before proceeding with configuring Container Insights on Azure Arc enabled Kubernetes to prevent any installation issues.
->
--
-## Prerequisites
-
-- Prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
-- Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under the Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or the [Azure portal](../logs/quick-create-workspace.md).
-- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then a [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the resource group containing the Log Analytics workspace.
-- To view the monitoring data, you need the [Monitoring Reader](../roles-permissions-security.md#monitoring-reader) or [Monitoring Contributor](../roles-permissions-security.md#monitoring-contributor) role.
-- Verify the [firewall requirements for Container insights](./container-insights-onboard.md#network-firewall-requirements) in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md).
-- If you're using an Arc-enabled cluster on AKS and previously installed [monitoring for AKS](./container-insights-enable-existing-clusters.md), ensure that you have [disabled monitoring](./container-insights-optout.md) before proceeding to avoid issues during the extension install.
-- If you previously deployed Azure Monitor Container Insights on this cluster using a script without cluster extensions, follow the instructions listed [here](container-insights-optout-hybrid.md) to delete this Helm chart. You can then continue to creating a cluster extension instance for Azure Monitor Container Insights.
-
-### Identify workspace resource ID
-
-Run the following commands to locate the full Azure Resource Manager identifier of the Log Analytics workspace.
-
-1. List all the subscriptions that you have access to using the following command:
-
- ```azurecli
- az account list --all -o table
- ```
-
-2. Switch to the subscription hosting the Log Analytics workspace using the following command:
-
- ```azurecli
- az account set -s <subscriptionId of the workspace>
- ```
-
-3. The following example displays the list of workspaces in your subscriptions in the default JSON format.
-
- ```azurecli
- az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
- ```
-
- In the output, find the workspace name of interest. The `id` field of that entry is the Azure Resource Manager identifier of that Log Analytics workspace. A sketch after these steps shows one way to list just the workspace names and IDs.
-
- >[!TIP]
- > This `id` can also be found in the *Overview* pane of the Log Analytics workspace through the Azure portal.
-
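-As an alternative to scanning the full JSON output in step 3 above, the following sketch uses a JMESPath query to list just the workspace names and resource IDs so you can copy the `id` you need.
-
-```azurecli
-# Sketch: list Log Analytics workspaces with only their name and resource ID.
-az resource list --resource-type Microsoft.OperationalInsights/workspaces \
-  --query "[].{name:name, id:id}" --output table
-```
-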
-## Create extension instance
-
-## [CLI](#tab/create-cli)
-
-### Option 1 - With default values
-
-This option uses the following defaults:
-
-- Creates or uses an existing default Log Analytics workspace corresponding to the region of the cluster.
-- Auto-upgrade is enabled for the Azure Monitor cluster extension.
-
->[!NOTE]
-> Managed identity authentication is the default in k8s-extension version 1.43.0 or higher.
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
-```
-
-To use [managed identity authentication](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following:
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
-```
-
->[!NOTE]
-> Managed identity authentication is not supported for Arc-enabled Kubernetes clusters with ARO (Azure Red Hat Openshift) or Windows nodes.
->
-
-To use legacy/non-managed identity authentication to create an extension instance on **Arc K8S connected clusters with ARO**, use the commands below that don't use managed identity. Non-cli onboarding is not supported for Arc-enabled Kubernetes clusters with **ARO**.
-
-Install the extension with **amalogs.useAADAuth=false**.
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=false
-```
-
-### Option 2 - With existing Azure Log Analytics workspace
-
-You can use an existing Azure Log Analytics workspace in any subscription on which you have *Contributor* or a more permissive role assignment.
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<armResourceIdOfExistingWorkspace>
-```
-
-### Option 3 - With advanced configuration
-
-If you want to tweak the default resource requests and limits, you can use the advanced configurations settings:
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.resources.daemonset.limits.cpu=150m amalogs.resources.daemonset.limits.memory=600Mi amalogs.resources.deployment.limits.cpu=1 amalogs.resources.deployment.limits.memory=750Mi
-```
-
-Check out the [resource requests and limits section of Helm chart](https://github.com/microsoft/Docker-Provider/blob/ci_prod/charts/azuremonitor-containers/values.yaml) for the available configuration settings.
-
-### Option 4 - On Azure Stack Edge
-
-If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used.
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=/home/data/docker
-```
-
-### Option 5 - With Azure Monitor Private Link Scope (AMPLS) + Proxy
-
-If the cluster is configured with a forward proxy, then proxy settings are automatically applied to the extension. In the case of a cluster with AMPLS + proxy, proxy config should be ignored. Onboard the extension with the configuration setting `amalogs.ignoreExtensionProxySettings=true`.
-
-```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.ignoreExtensionProxySettings=true
-```
-
->[!NOTE]
-> If you are explicitly specifying the version of the extension to be installed in the create command, then ensure that the version specified is >= 2.8.2.
-
-## [Azure portal](#tab/create-portal)
-
->[!IMPORTANT]
-> If you are deploying Azure Monitor on a Kubernetes cluster running on top of Azure Stack Edge, then the Azure CLI option needs to be followed instead of the Azure portal option as a custom mount path needs to be set for these clusters.
-
-### Onboarding from the Azure Arc-enabled Kubernetes resource pane
-
-1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster that you wish to monitor.
-
-2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
-
-3. On the onboarding page, select the 'Configure Azure Monitor' button
-
-4. You can now choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.
-
-5. To use managed identity authentication, select the *Use managed identity* checkbox.
-
-6. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
-
-### Onboarding from Azure Monitor pane
-
-1. In the Azure portal, navigate to the 'Monitor' pane, and select the 'Containers' option under the 'Insights' menu.
-
-2. Select the 'Unmonitored clusters' tab to view the Azure Arc-enabled Kubernetes clusters that you can enable monitoring for.
-
-3. Click on the 'Enable' link next to the cluster that you want to enable monitoring for.
-
-4. Choose the Log Analytics workspace.
-
-5. To use managed identity authentication, select the *Use managed identity* checkbox.
-
-6. Select the 'Configure' button to continue.
-
-## [ARM](#tab/create-arm)
-
-This section has instructions for onboarding with legacy authentication. For MSI-based onboarding, see the next tab.
-
-1. Download Azure Resource Manager template and parameter:
-
- ```console
- curl -L https://aka.ms/arc-k8s-azmon-extension-arm-template -o arc-k8s-azmon-extension-arm-template.json
- curl -L https://aka.ms/arc-k8s-azmon-extension-arm-template-params -o arc-k8s-azmon-extension-arm-template-params.json
- ```
-
-2. Update parameter values in arc-k8s-azmon-extension-arm-template-params.json file. For Azure public cloud, `opinsights.azure.com` needs to be used as the value of workspaceDomain and for AzureUSGovernment, `opinsights.azure.us` needs to be used as the value of workspaceDomain.
-
-3. Deploy the template to create Azure Monitor Container Insights extension
-
- ```azurecli
- az login
- az account set --subscription "Subscription Name"
- az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-arm-template.json --parameters @./arc-k8s-azmon-extension-arm-template-params.json
- ```
-
-## [ARM (with MSI)](#tab/create-arm-msi)
-
-Onboard using an ARM template with MSI based authentication enabled
-
-1. Download Azure Resource Manager template and parameter:
-
- ```console
- curl -L https://aka.ms/arc-k8s-azmon-extension-msi-arm-template -o arc-k8s-azmon-extension-arm-template.json
- curl -L https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params -o arc-k8s-azmon-extension-arm-template-params.json
- ```
-
-2. Update parameter values in arc-k8s-azmon-extension-arm-template-params.json file. For Azure public cloud, `opinsights.azure.com` needs to be used as the value of workspaceDomain and for AzureUSGovernment, `opinsights.azure.us` needs to be used as the value of workspaceDomain.
-
-3. Deploy the template to create Azure Monitor Container Insights extension
-
- ```azurecli
- az login
- az account set --subscription "Subscription Name"
- az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-arm-template.json --parameters @./arc-k8s-azmon-extension-arm-template-params.json
- ```
---
-## Verify extension installation status
-Once you have successfully created the Azure Monitor extension for your Azure Arc-enabled Kubernetes cluster, you can additionally check the status of installation using the Azure portal or CLI. Successful installations should show the status as 'Installed'. If your status is showing 'Failed' or remains in the 'Pending' state for long periods of time, proceed to the Troubleshooting section below.
-
-### [Azure portal](#tab/verify-portal)
-1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster with the extension installing
-2. From the resource pane on the left, select the 'Extensions' item under the 'Settings' section.
-3. You should see an extension with the name 'azuremonitor-containers' listed, with the listed status in the 'Install status' column
-### [CLI](#tab/verify-cli)
-Run the following command to show the latest status of the `Microsoft.AzureMonitor.Containers` extension
-```azurecli
-az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters
-```
----
-## Delete extension instance
-
-The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data within the Log Analytics resource is left intact.
-
-```azurecli
-az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group>
-```
-
-## Disconnected cluster
-If your cluster is disconnected from Azure for more than 48 hours, Azure Resource Graph won't have information about your cluster. As a result, the Insights pane might display incorrect information about your cluster state.
-
-## Troubleshooting
-For issues with enabling monitoring, we have provided a [troubleshooting script](https://aka.ms/azmon-ci-troubleshooting) to help diagnose any problems.
-
-## Next steps
-
-- With monitoring enabled to collect health and resource utilization of your Azure Arc-enabled Kubernetes cluster and the workloads running on it, learn [how to use](container-insights-analyze.md) Container insights.
-- By default, the containerized agent collects the stdout/stderr container logs of all containers running in all namespaces except kube-system. To configure container log collection for a particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to apply the desired data collection settings to your ConfigMap configuration file.
-- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md).
azure-monitor Container Insights Enable Provisioned Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-provisioned-clusters.md
- Title: Monitor AKS hybrid clusters Previously updated : 01/10/2023-
-description: Collect metrics and logs of AKS hybrid clusters using Azure Monitor.
---
-# Azure Monitor container insights for Azure Kubernetes Service (AKS) hybrid clusters (preview)
-
->[!NOTE]
->Support for monitoring AKS hybrid clusters is currently in preview. We recommend only using preview features in safe testing environments.
-
-[Azure Monitor container insights](./container-insights-overview.md) provides a rich monitoring experience for [AKS hybrid clusters (preview)](/azure/aks/hybrid/aks-hybrid-options-overview). This article describes how to set up Container insights to monitor an AKS hybrid cluster.
-
-## Supported configurations
-
-- Azure Monitor container insights supports monitoring only Linux containers.
-## Prerequisites
-
-- Prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
-- Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under the Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or the [Azure portal](../logs/quick-create-workspace.md).
-- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then a [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
-- To view the monitoring data, you need a [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace.
-- The following endpoints need to be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md).
-- Azure CLI version 2.43.0 or higher.
-- Azure k8s-extension version 1.3.7 or higher.
-- Azure Resource-graph version 2.1.0.
-
-## Onboarding
-
-## [CLI](#tab/create-cli)
-
-```azcli
-az login
-
-az account set --subscription <cluster-subscription-name>
-
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
-```
-## [Azure portal](#tab/create-portal)
-
-### Onboarding from the AKS hybrid resource pane
-
-1. In the Azure portal, select the AKS hybrid cluster that you wish to monitor.
-
-2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
-
-3. On the onboarding page, select the 'Configure Azure Monitor' button
-
-4. You can now choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.
-
-5. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
-
-### Onboarding from Azure Monitor pane
-
-1. In the Azure portal, navigate to the 'Monitor' pane, and select the 'Containers' option under the 'Insights' menu.
-
-2. Select the 'Unmonitored clusters' tab to view the AKS hybrid clusters that you can enable monitoring for.
-
-3. Click on the 'Enable' link next to the cluster that you want to enable monitoring for.
-
-4. Choose the Log Analytics workspace.
-
-5. Select the 'Configure' button to continue.
--
-## [Resource Manager](#tab/create-arm)
-
-1. Download the Azure Resource Manager Template and Parameter files
-
-```bash
-curl -L https://aka.ms/existingClusterOnboarding.json -o existingClusterOnboarding.json
-```
-
-```bash
-curl -L https://aka.ms/existingClusterParam.json -o existingClusterParam.json
-```
-
-2. Edit the values in the parameter file.
-
- - For clusterResourceId and clusterRegion, use the values on the Overview page for the LCM cluster
- - For workspaceResourceId, use the resource ID of your Log Analytics workspace
- - For workspaceRegion, use the Location of your Log Analytics workspace
- For workspaceDomain, use "opinsights.azure.com" as the workspace domain value for the Azure public cloud, and "opinsights.azure.cn" for the Microsoft Azure operated by 21Vianet cloud
- For resourceTagValues, leave empty if you don't have specific tag values
-
-3. Deploy the ARM template
-
-```azurecli
-az login
-
-az account set --subscription <cluster-subscription-name>
-
-az deployment group create --resource-group <resource-group> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
-```
--
-## Validation
-
-### Extension details
-
-Showing the extension details:
-
-```azcli
-az k8s-extension list --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice"
-```
--
-## Delete extension
-
-The command for deleting the extension:
-
-```azcli
-az k8s-extension delete --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --name azuremonitor-containers --yes
-```
-
-## Known Issues/Limitations
--- Windows containers are not supported currently
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Supported API definitions for the Azure Stack Hub cluster can be found in the ex
## Configure agent data collection
-Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
+Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-data-collection-configmap.md).
After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
For help with setting up or troubleshooting the Live Data feature, see the [Setu
## View AKS resource live logs
+> [!NOTE]
+> You must be on a machine on the same private network to access live logs from a private cluster.
+ To view the live logs for pods, deployments, replica sets, stateful sets, daemon sets, and jobs with or without Container insights from the AKS resource view: 1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
ContainerInventory
### Kubernetes events > [!NOTE]
-> By default, Normal event types aren't collected, so you won't see them when you query the KubeEvents table unless the *collect_all_kube_events* ConfigMap setting is enabled. If you need to collect Normal events, enable *collect_all_kube_events setting* in the *container-azm-ms-agentconfig* ConfigMap. See [Configure agent data collection for Container insights](./container-insights-agent-config.md) for information on how to configure the ConfigMap.
+> By default, Normal event types aren't collected, so you won't see them when you query the KubeEvents table unless the *collect_all_kube_events* ConfigMap setting is enabled. If you need to collect Normal events, enable *collect_all_kube_events setting* in the *container-azm-ms-agentconfig* ConfigMap. See [Configure agent data collection for Container insights](./container-insights-data-collection-configmap.md) for information on how to configure the ConfigMap.
``` kusto
KubePodInventory
## Container logs
-Container logs for AKS are stored in [the ContainerLogV2 table](./container-insights-logging-v2.md). You can run the following sample queries to look for the stderr/stdout log output from target pods, deployments, or namespaces.
+Container logs for AKS are stored in [the ContainerLogV2 table](./container-insights-logs-schema.md). You can run the following sample queries to look for the stderr/stdout log output from target pods, deployments, or namespaces.
### Container logs for a specific pod, namespace, and container
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
- Title: Configure the ContainerLogV2 schema for Container Insights
-description: Switch your ContainerLog table to the ContainerLogV2 schema.
----- Previously updated : 08/28/2023---
-# Enable the ContainerLogV2 schema
-Azure Monitor Container insights offers a schema for container logs, called ContainerLogV2. As part of this schema, there are fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
-
->[!NOTE]
-> ContainerLogV2 will be the default schema via the ConfigMap for CLI version 2.54.0 and greater. ContainerLogV2 will be default ingestion format for customers who will be onboarding container insights with Managed Identity Auth using ARM, Bicep, Terraform, Policy and Portal onboarding. ContainerLogV2 can be explicitly enabled through CLI version 2.51.0 or higher using Data collection settings.
-
-The new fields are:
-* `ContainerName`
-* `PodName`
-* `PodNamespace`
-
-## ContainerLogV2 schema
-```kusto
- Computer: string,
- ContainerId: string,
- ContainerName: string,
- PodName: string,
- PodNamespace: string,
- LogMessage: dynamic,
- LogSource: string,
- TimeGenerated: datetime
-```
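-
-To see what these columns look like for your own cluster, one option (a sketch, assuming the `log-analytics` CLI extension and your workspace's customer ID rather than its ARM resource ID) is to run a quick query from the command line:
-
-```azurecli
-# Sketch: pull a few recent ContainerLogV2 rows to inspect the schema columns.
-az extension add --name log-analytics
-az monitor log-analytics query --workspace <workspace-customer-id> \
-  --analytics-query "ContainerLogV2 | project TimeGenerated, PodNamespace, PodName, ContainerName, LogSource, LogMessage | take 10"
-```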
-
->[!NOTE]
-> [Export](../logs/logs-data-export.md) to Event Hub and Storage Account is not supported if the incoming LogMessage is not valid JSON. For best performance, we recommend emitting container logs in JSON format.
-
-## Enable the ContainerLogV2 schema
-Customers can enable the ContainerLogV2 schema at the cluster level through either the cluster's Data Collection Rule (DCR) or ConfigMap. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview).
-Follow the instructions to configure an existing ConfigMap or to use a new one.
-
->[!NOTE]
-> Because ContainerLogV2 can be enabled through either the DCR or the ConfigMap, when both are enabled, the ContainerLogV2 setting of the ConfigMap takes precedence. Stdout and stderr logs will only be ingested to the ContainerLog table when both the DCR and ConfigMap are explicitly set to off.
-
-### Configure via an existing Data Collection Rule (DCR)
-
-## [Azure portal](#tab/configure-portal)
-
->[!NOTE]
-> DCR based configuration is not supported for service principal based clusters. Please [migrate your clusters with service principal to managed identity](./container-insights-authentication.md) to use this experience.
-
-1. In the Insights section of your Kubernetes cluster, select the **Monitoring Settings** button from the top toolbar
--
-2. Select **Edit collection settings** to open the advanced settings
--
-3. Select the checkbox with **Enable ContainerLogV2** and choose the **Save** button below
--
-4. The summary section should display the message "ContainerLogV2 enabled", click the **Configure** button to complete your configuration change
--
-## [CLI](#tab/configure-CLI)
-
-1. To configure via the CLI, use the corresponding [config file](./container-insights-cost-config.md#enable-cost-settings) and set the `enableContainerLogV2` field in the config file to true.
---
-
-### Configure an existing ConfigMap
-This applies to the scenario where you have already enabled container insights for your AKS cluster and have [configured agent data collection settings](./container-insights-agent-config.md#configure-and-deploy-configmaps) using ConfigMap "_container-azm-ms-agentconfig.yaml_". If this ConfigMap doesn't yet have the `log_collection_settings.schema` field, you'll need to append the following section in this existing ConfigMap .yaml file:
-
-```yaml
-[log_collection_settings.schema]
- # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
- # Supported values for this setting are "v1","v2"
- # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
- containerlog_schema_version = "v2"
-```
-
-### Configure a new ConfigMap
-1. [Download the new ConfigMap](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded ConfigMap, the default value for `containerlog_schema_version` is `"v2"`.
-1. Ensure that `containerlog_schema_version` is set to `"v2"` and that the `[log_collection_settings.schema]` section is uncommented by removing the `#` preceding it:
-
- ```yaml
- [log_collection_settings.schema]
- # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
- # Supported values for this setting are "v1","v2"
- # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
- containerlog_schema_version = "v2"
- ```
-
-3. After you finish configuring the ConfigMap, run the following kubectl command: `kubectl apply -f <configname>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-
->[!NOTE]
->* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart.
->* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
-
-## Multi-line logging in Container Insights
-Azure Monitor container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
-The feature also adds support for .NET, Go, Python and Java stack traces, which appear as single entries instead of being split into multiple entries in the ContainerLogV2 table.
-
-Below are two screenshots which demonstrate Multi-line logging at work for Go exception stack trace:
-
-Multi-line logging disabled scenario:
-<!-- convertborder later -->
-
-Multi-line logging enabled scenario:
-<!-- convertborder later -->
-
-Similarly, below screenshots depict Multi-line logging enabled scenarios for Java and Python stack traces:
-
-For Java:
--
-For Python:
--
-### Pre-requisites
-
-Customers must [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) for multi-line logging to work.
-
-### How to enable
-Multi-line logging feature can be enabled by setting **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml)
-
-```yaml
-[log_collection_settings.enable_multiline_logs]
-# fluent-bit based multiline log collection for go (stacktrace), dotnet (stacktrace)
-# if enabled will also stitch together container logs split by docker/cri due to size limits(16KB per log line)
- enabled = "true"
-```
-
-## Next steps
-* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
-* Learn how [query data](./container-insights-log-query.md#container-logs) from ContainerLogV2
-
azure-monitor Container Insights Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logs-schema.md
+
+ Title: Configure the ContainerLogV2 schema for Container Insights
+description: Switch your ContainerLog table to the ContainerLogV2 schema.
+++++ Last updated : 08/28/2023+++
+# Container insights log schema
+Container insights stores the log data it collects in a table called *ContainerLogV2*. This article describes the table's schema, compares it with the legacy *ContainerLog* table, and explains how to migrate from the legacy table.
++
+>[!IMPORTANT]
+> ContainerLogV2 will be the default schema via the ConfigMap for CLI version 2.54.0 and greater. It will also be the default ingestion format for customers who onboard Container insights with managed identity authentication by using ARM, Bicep, Terraform, Policy, or portal onboarding. ContainerLogV2 can be explicitly enabled through CLI version 2.51.0 or higher by using data collection settings.
+>
+> Support for the *ContainerLog* table will be retired on 30th September 2026.
+
+
+## Table comparison
+The following table highlights the key differences between the ContainerLogV2 and ContainerLog schemas.
+
+| Feature differences | ContainerLog | ContainerLogV2 |
+| - | -- | - |
+| Schema | Details at [ContainerLog](/azure/azure-monitor/reference/tables/containerlog). | Details at [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2).<br>Additional columns are:<br>- `ContainerName`<br>- `PodName`<br>- `PodNamespace`. |
+| Onboarding | Only configurable through ConfigMap. | Configurable through both ConfigMap and DCR. <sup>1</sup>|
+| Pricing | Only compatible with full-priced analytics logs. | Supports the low cost [basic logs](../logs/basic-logs-configure.md) tier in addition to analytics logs. |
+| Querying | Requires multiple join operations with inventory tables for standard queries. | Includes additional pod and container metadata to reduce query complexity and join operations. |
+| Multiline | Not supported, multiline entries are split into multiple rows. | Support for multiline logging to allow consolidated, single entries for multiline output. |
+
+<sup>1</sup>DCR configuration isn't supported for clusters that use service principal authentication. [Migrate your clusters with service principal to managed identity](./container-insights-authentication.md) to use this experience.
+
+>[!NOTE]
+> [Export](../logs/logs-data-export.md) to Event Hub and Storage Account is not supported if the incoming LogMessage is not a valid JSON. For best performance, we recommend emitting container logs in JSON format.
++
+## Assess the impact on existing alerts
+Before you enable the **ContainerLogV2** schema, assess whether you have any alert rules that rely on the **ContainerLog** table. Any such alerts need to be updated to use the new table.
+
+To scan for alerts that reference the **ContainerLog** table, run the following Azure Resource Graph query:
+
+```Kusto
+resources
+| where type in~ ('microsoft.insights/scheduledqueryrules') and ['kind'] !in~ ('LogToMetric')
+| extend severity = strcat("Sev", properties["severity"])
+| extend enabled = tobool(properties["enabled"])
+| where enabled in~ ('true')
+| where tolower(properties["targetResourceTypes"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["targetResourceType"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["scopes"]) matches regex 'providers/microsoft.operationalinsights/workspaces($|/.*)?'
+| where properties contains "ContainerLog"
+| project id,name,type,properties,enabled,severity,subscriptionId
+| order by tolower(name) asc
+```
+
+## Enable the ContainerLogV2 schema
+You can enable the **ContainerLogV2** schema for a cluster through either the cluster's Data Collection Rule (DCR) or its ConfigMap. If both settings are enabled, the ConfigMap takes precedence. Stdout and stderr logs are ingested to the ContainerLog table only when both the DCR and ConfigMap are explicitly set to off.
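For reference, here's a minimal sketch of the ConfigMap setting, taken from the schema section of *container-azm-ms-agentconfig.yaml* as documented in the earlier version of this article:

```yaml
[log_collection_settings.schema]
  # In the absence of this setting, the default value for containerlog_schema_version is "v1"
  # Supported values for this setting are "v1" and "v2"
  containerlog_schema_version = "v2"
```

Apply the updated ConfigMap with `kubectl apply -f container-azm-ms-agentconfig.yaml`. The change takes a few minutes to take effect, and the ama-logs pods restart on a rolling basis.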
+++
+## Multi-line logging in Container Insights
+With multiline logging enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. If the stitched log line is larger than 64 KB, it's truncated due to Log Analytics workspace limits. The feature also supports .NET, Go, Python, and Java stack traces, which appear as single entries in the ContainerLogV2 table.
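As a sketch of the corresponding ConfigMap setting (names taken from the earlier version of this article and the *container-azm-ms-agentconfig.yaml* file), multi-line collection is controlled by the `[log_collection_settings.enable_multiline_logs]` section:

```yaml
[log_collection_settings.enable_multiline_logs]
# fluent-bit based multiline log collection for stack traces
# when enabled, also stitches together container logs split by docker/cri due to the 16 KB per-line limit
 enabled = "true"
```

Note that ContainerLogV2 must be enabled for multi-line logging to work.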
+
+The following screenshots show multi-line logging for a Go exception stack trace:
+
+**Multi-line logging disabled**
+
+<!-- convertborder later -->
+
+**Multi-line logging enabled**
+
+<!-- convertborder later -->
+
+**Java stack trace**
++
+**Python stack trace**
+++
+## Next steps
+* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
+* Learn how [query data](./container-insights-log-query.md#container-logs) from ContainerLogV2
+
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Title: Manage the Container insights agent | Microsoft Docs
-description: This article describes how to manage the most common maintenance tasks with the containerized Log Analytics agent used by Container insights.
+ Title: Manage the Container insights agent
+description: Describes how to manage the most common maintenance tasks with the containerized Log Analytics agent used by Container insights.
Previously updated : 07/21/2020 Last updated : 12/19/2023
Container Insights uses a containerized version of the Log Analytics agent for Linux. After initial deployment, you might need to perform routine or optional tasks during its lifecycle. This article explains how to manually upgrade the agent and disable collection of environmental variables from a particular container.
->[!NOTE]
->The Container Insights agent name has changed from OMSAgent to Azure Monitor Agent, along with a few other resource names. This article reflects the new name. Update your commands, alerts, and scripts that reference the old name. Read more about the name change in [our blog post](https://techcommunity.microsoft.com/t5/azure-monitor-status-archive/name-update-for-agent-and-associated-resources-in-azure-monitor/ba-p/3576810).
->
+> [!NOTE]
+> If you've already deployed an AKS cluster and enabled monitoring by using either the Azure CLI or a Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
+ ## Upgrade the Container insights agent
If the agent upgrade fails for a cluster hosted on AKS, this article also descri
### Upgrade the agent on an AKS cluster
-The process to upgrade the agent on an AKS cluster consists of two steps. The first step is to disable monitoring with Container insights by using the Azure CLI. Follow the steps described in the [Disable monitoring](container-insights-optout.md?#azure-cli) article. By using the Azure CLI, you can remove the agent from the nodes in the cluster without affecting the solution and the corresponding data that's stored in the workspace.
+The process to upgrade the agent on an AKS cluster consists of two steps. The first step is to disable monitoring with Container insights by using the Azure CLI. Follow the steps described in the [Disable Container insights on your Kubernetes cluster](kubernetes-monitoring-disable.md) article. By using the Azure CLI, you can remove the agent from the nodes in the cluster without affecting the solution and the corresponding data that's stored in the workspace.
>[!NOTE] >While you're performing this maintenance activity, the nodes in the cluster aren't forwarding collected data. Performance views won't show data between the time you removed the agent and installed the new version.
This command opens your default text editor. After you set the variable, save th
To verify the configuration change took effect, select a container in the **Containers** view in Container insights. In the property pane, expand **Environment Variables**. The section should show only the variable created earlier, which is `AZMON_COLLECT_ENV=FALSE`. For all other containers, the **Environment Variables** section should list all the environment variables discovered. To reenable discovery of the environmental variables, apply the same process you used earlier and change the value from `False` to `True`. Then rerun the `kubectl` command to update the container.- ```yaml - name: AZMON_COLLECT_ENV value: "True" ``` ## Semantic version update of container insights agent version
-Container Insights has shifted the image version and naming convention to [semver format] (https://semver.org/). SemVer helps developers keep track of every change made to a software during its development phase and ensures that the software versioning is consistent and meaningful. The old version was in format of ciprod\<timestamp\>-\<commitId\> and win-ciprod\<timestamp\>-\<commitId\>, our first image versions using the Semver format are 3.1.4 for Linux and win-3.1.4 for Windows.
+Container Insights has shifted the image version and naming convention to [semver format](https://semver.org/). SemVer helps developers keep track of every change made to software during its development phase and ensures that the software versioning is consistent and meaningful. The old version was in the format ciprod\<timestamp\>-\<commitId\> and win-ciprod\<timestamp\>-\<commitId\>. The first image versions that use the semver format are 3.1.4 for Linux and win-3.1.4 for Windows.
Semver is a universal software versioning schema that's defined in the format MAJOR.MINOR.PATCH, with the following constraints:
With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to
## Repair duplicate agents
-Customers who manually enable Container Insights using custom methods prior to October 2022 can end up with multiple versions of our agent running together. To clear this duplication, customers are recommended to follow the steps below:
+If you manually enabled Container Insights using custom methods prior to October 2022, you might end up with multiple versions of the agent running together. Follow the steps below to clear this duplication.
-### Migration guidelines for AKS clusters
-1. Get details of customer's custom settings, such as memory and CPU limits on omsagent containers.
+1. Gather details of any custom settings, such as memory and CPU limits on your omsagent containers.
-2. Review Resource Limits:
+2. Review the default resource limits for ama-logs, shown in the following table, and determine whether they meet your needs. If not, you might need to create a support ticket to help investigate and adjust the memory/CPU limits. This can help address the scale limitation issues that previously resulted in OOMKilled exceptions for some customers.
-Current ama-logs default limit are below
+ | OS | Controller Name | Default Limits |
+ ||||
+ | Linux | ds-cpu-limit-linux | 500m |
+ | Linux | ds-memory-limit-linux | 750Mi |
+ | Linux | rs-cpu-limit | 1 |
+ | Linux | rs-memory-limit | 1.5Gi |
+ | Windows | ds-cpu-limit-windows | 500m |
+ | Windows | ds-memory-limit-windows | 1Gi |
-| OS | Controller Name | Default Limits |
-||||
-| Linux | ds-cpu-limit-linux | 500m |
-| Linux | ds-memory-limit-linux | 750Mi |
-| Linux | rs-cpu-limit | 1 |
-| Linux | rs-memory-limit | 1.5Gi |
-| Windows | ds-cpu-limit-windows | 500m |
-| Windows | ds-memory-limit-windows | 1Gi |
-
-Validate whether the current default settings and limits meet the customer's needs. And if not, create support tickets under containerinsights agent to help investigate and toggle memory/cpu limits for the customer. Through doing this, it can help address the scale limitations issues that some customers encountered previously that resulted in OOMKilled exceptions.
-
-3. Fetch current Azure analytic workspace ID since we're going to re-onboard the container insights.
-
-```console
-az aks show -g $resourceGroupNameofCluster -n $nameofTheCluster | grep logAnalyticsWorkspaceResourceID`
-```
4. Clean resources from previous onboarding:
-**For customers that previously onboarded to containerinsights through helm chart** :
-
-• List all releases across namespaces with command:
-
-```console
- helm list --all-namespaces
-```
-
-• Clean the chart installed for containerinsights (or azure-monitor-containers) with command:
-
-```console
-helm uninstall <releaseName> --namespace <Namespace>
-```
-
-**For customers that previously onboarded to containerinsights through yaml deployment** :
-
-• Download previous custom deployment yaml file:
-
-```console
-curl -LO raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/kubernetes/omsagent.yaml
-```
-
-• Clean the old omsagent chart:
-
-```console
-kubectl delete -f omsagent.yaml
-```
-
-5. Disable container insights to clean all related resources with aks command: [Disable Container insights on your Azure Kubernetes Service (AKS) cluster - Azure Monitor | Microsoft Learn](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-optout)
-
-```console
-az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG
-```
-
-6. Re-onboard to containerinsights with the workspace fetched from step 3 using [the steps outlined here](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#specify-a-log-analytics-workspace)
+ **If you previously onboarded using helm chart** :
+
+ List all releases across namespaces with the following command:
+
+ ```console
+ helm list --all-namespaces
+ ```
+
+ Clean the chart installed for Container insights with the following command:
+
+ ```console
+ helm uninstall <releaseName> --namespace <Namespace>
+ ```
+
+ **If you previously onboarded using yaml deployment** :
+
+ Download previous custom deployment yaml file with the following command:
+
+ ```console
+ curl -LO raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/kubernetes/omsagent.yaml
+ ```
+
+ Clean the old omsagent chart with the following command:
+
+ ```console
+ kubectl delete -f omsagent.yaml
+ ```
+
+5. Disable Container insights to clean up all related resources by using the guidance at [Disable Container insights on your Kubernetes cluster](../containers/kubernetes-monitoring-disable.md).
++
+6. Re-onboard to Container insights by using the guidance at [Enable Container insights on your Kubernetes cluster](kubernetes-monitoring-enable.md).
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Create metric alert rules in Container insights (preview)
+ Title: Metric alert rules for Kubernetes clusters (preview)
description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights. Last updated 03/13/2023
-# Metric alert rules in Container insights (preview)
+# Metric alert rules for Kubernetes clusters (preview)
Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides preconfigured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
The following metrics have unique behavior characteristics:
**Prometheus only** -- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. For details related to configuring your ConfigMap configuration file, see [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). Collection of persistent volume metrics with claims in the `kube-system` namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. For more information, see [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings).
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. For details related to configuring your ConfigMap configuration file, see [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). Collection of persistent volume metrics with claims in the `kube-system` namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. For more information, see [Metric collection settings](./container-insights-data-collection-configmap.md#data-collection-settings).
- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and Memory Working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. If you want to collect these metrics and analyze them from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. For details related to configuring your ConfigMap configuration file, see the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). ## View alerts
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
- Title: Enable Container insights
-description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified.
-- Previously updated : 10/18/2023---
-# Enable Container insights
-
-This article provides an overview of the requirements and options that are available for enabling [Container insights](../containers/container-insights-overview.md) on your Kubernetes clusters. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using several supported methods.
-
-## Supported configurations
-
-Container insights supports the following environments:
-- [Azure Kubernetes Service (AKS)](../../aks/index.yml)-- Following [Azure Arc-enabled Kubernetes cluster distributions](../../azure-arc/kubernetes/validation-program.md):
- - AKS on Azure Stack HCI
- - AKS Edge Essentials
- - Canonical
- - Cluster API Provider on Azure
- - K8s on Azure Stack Edge
- - Red Hat OpenShift version 4.x
- - SUSE Rancher (Rancher Kubernetes engine)
- - SUSE Rancher K3s
- - VMware (ie. TKG)
-
-> [!NOTE]
-> Container insights supports ARM64 nodes on AKS. See [Cluster requirements](../../azure-arc/kubernetes/system-requirements.md#cluster-requirements) for the details of Azure Arc-enabled clusters that support ARM64 nodes.
--
-## Prerequisites
--- Container insights stores its data in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). It supports workspaces in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md). You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace, you'll probably want to use that one. For more information, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).-- Permissions
- - To enable Container insights, you must have at least [Contributor](../../role-based-access-control/built-in-roles.md#contributor) access to the AKS cluster.
- - To view data after container monitoring is enabled, you must have [Monitoring Reader](../roles-permissions-security.md#monitoring-reader) or [Monitoring Contributor](../roles-permissions-security.md#monitoring-contributor) role.
-
-## Authentication
-
-Container insights uses managed identity authentication. This authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. Read more in [Authentication for Container Insights](container-insights-authentication.md) including guidance on migrating from legacy authentication models.
-
-> [!Note]
-> [ContainerLogV2](container-insights-logging-v2.md) is the default schema when you onboard Container insights with using ARM, Bicep, Terraform, Policy and Portal onboarding. ContainerLogV2 can be explicitly enabled through CLI version 2.51.0 or higher using Data collection settings.
--
-## Agent
-
-Container insights relies on a containerized [Azure Monitor agent](../agents/agents-overview.md) for Linux. This specialized agent collects performance and event data from all nodes in the cluster and sends it to a Log Analytics workspace. The agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-
-### Data collection rule
-[Data collection rules (DCR)](../essentials/data-collection-rule-overview.md) contain the definition of data that should be collected by Azure Monitor agent. When you enable Container insights on a cluster, a DCR is created with the name *MSCI-\<cluster-region\>-<\cluster-name\>*. Currently, this name can't be modified.
-
-Since March 1, 2023 Container insights uses a semver compliant agent version. The agent version is *mcr.microsoft.com/azuremonitor/containerinsights/ciprod:3.1.4* or later. It's represented by the format mcr.microsoft.com/azuremonitor/containerinsights/ciprod:\<semver compatible version\>. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/Docker-Provider/blob/ci_prod/ReleaseNotes.md).
-
-> [!NOTE]
-> Ingestion Transformations are not currently supported with the [Container insights DCR](../essentials/data-collection-transformations.md).
--
-### Log Analytics agent
-
-When Container insights doesn't use managed identity authentication, it relies on a containerized [Log Analytics agent for Linux](../agents/log-analytics-agent.md). The agent version is *microsoft/oms:ciprod04202018* or later. It's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
-
-With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemon set pod on each individual Windows Server node to collect logs and forward them to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor for all Windows nodes in the cluster.
-
-> [!NOTE]
-> If you've already deployed an AKS cluster and enabled monitoring by using either the Azure CLI or a Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
--
-## Differences between Windows and Linux clusters
-
-The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
--- Windows doesn't have a Memory RSS metric. As a result, it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.-- Disk storage capacity information isn't available for Windows nodes.-- Only pod environments are monitored, not Docker environments.-- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.-
->[!NOTE]
-> Container insights support for the Windows Server 2022 operating system is in preview.
--
-The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows node and container performance-related metrics collection to work.
-
-If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open for both inbound and outbound in the cluster's virtual network.
--
-## Network firewall requirements
-
-The following table lists the proxy and firewall configuration information required for the containerized agent to communicate with Container insights. All network traffic from the agent is outbound to Azure Monitor.
-
-**Azure public cloud**
-
-| Endpoint |Port |
-|--||
-| `*.ods.opinsights.azure.com` | 443 |
-| `*.oms.opinsights.azure.com` | 443 |
-| `dc.services.visualstudio.com` | 443 |
-| `*.monitoring.azure.com` | 443 |
-| `login.microsoftonline.com` | 443 |
-
-The following table lists the extra firewall configuration required for managed identity authentication.
-
-|Agent resource| Purpose | Port |
-|--|||
-| `global.handler.control.monitor.azure.com` | Access control service | 443 |
-| `<cluster-region-name>.ingest.monitor.azure.com` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
-| `<cluster-region-name>.handler.control.monitor.azure.com` | Fetch data collection rules for specific AKS cluster | 443 |
-
-**Microsoft Azure operated by 21Vianet cloud**
-
-The following table lists the proxy and firewall configuration information for Azure operated by 21Vianet.
-
-|Agent resource| Purpose | Port |
-|--||-|
-| `*.ods.opinsights.azure.cn` | Data ingestion | 443 |
-| `*.oms.opinsights.azure.cn` | OMS onboarding | 443 |
-| `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
-
-The following table lists the extra firewall configuration required for managed identity authentication.
-
-|Agent resource| Purpose | Port |
-|--|||
-| `global.handler.control.monitor.azure.cn` | Access control service | 443 |
-| `<cluster-region-name>.handler.control.monitor.azure.cn` | Fetch data collection rules for specific AKS cluster | 443 |
-
-**Azure Government cloud**
-
-The following table lists the proxy and firewall configuration information for Azure US Government.
-
-| Endpoint | Purpose | Port |
-|--||-|
-| `*.ods.opinsights.azure.us` | Data ingestion | 443 |
-| `*.oms.opinsights.azure.us` | OMS onboarding | 443 |
-| `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
-
-The following table lists the extra firewall configuration required for managed identity authentication.
-
-|Agent resource| Purpose | Port |
-|--|||
-| `global.handler.control.monitor.azure.us` | Access control service | 443 |
-| `<cluster-region-name>.handler.control.monitor.azure.us` | Fetch data collection rules for specific AKS cluster | 443 |
--
-## Troubleshooting
-If you registered your cluster and/or configured HCI Insights before November 2023, features that use the AMA agent on HCI, such as Arc for Servers Insights, VM Insights, Container Insights, Defender for Cloud or Sentinel might not be collecting logs and event data properly. See [Repair AMA agent for HCI](/azure-stack/hci/manage/monitor-hci-single?tabs=22h2-and-later) for steps to reconfigure the AMA agent and HCI Insights.
-
-## Next steps
-
-After you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on AKS, Azure Stack, or another environment.
-
-To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
--
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
- Title: Disable Container insights on your hybrid Kubernetes cluster
-description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights.
- Previously updated : 08/21/2023---
-# Disable Container insights on your hybrid Kubernetes cluster
-
-This article shows how to disable Container insights for the following Kubernetes environments:
--- AKS Engine on Azure and Azure Stack-- OpenShift version 4 and higher-- Azure Arc-enabled Kubernetes (preview)-
-## How to stop monitoring using Helm
-
-The following steps apply to the following environments:
--- AKS Engine on Azure and Azure Stack-- OpenShift version 4 and higher-
-1. To first identify the Container insights helm chart release installed on your cluster, run the following helm command.
-
- ```
- helm list
- ```
-
- The output resembles the following:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
- ```
-
- *azmon-containers-release-1* represents the helm chart release for Container insights.
-
-2. To delete the chart release, run the following helm command.
-
- `helm delete <releaseName>`
-
- Example:
-
- `helm delete azmon-containers-release-1`
-
- This removes the release from the cluster. You can verify by running the `helm list` command:
-
- ```
- NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- ```
-
-The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`.
-
-## How to stop monitoring on Azure Arc-enabled Kubernetes
-
-### Using PowerShell
-
-1. Download and save the script to a local folder that configures your cluster with the monitoring add-on using the following commands:
-
- ```powershell
- wget https://aka.ms/disable-monitoring-powershell-script -OutFile disable-monitoring.ps1
- ```
-
-2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
-
- ```powershell
- $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
-
-3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
-
- ```powershell
- $kubeContext = "<kubeContext name of your k8s cluster>"
- ```
-
-4. Run the following command to stop monitoring the cluster.
-
- ```powershell
- .\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext
- ```
-
-#### Using service principal
-The script *disable-monitoring.ps1* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the $servicePrincipalClientId, $servicePrincipalClientSecret, and $tenantId parameters with the values of the service principal you intend to use to the *disable-monitoring.ps1* script.
-
-```powershell
-$subscriptionId = "<subscription Id of the Azure Arc-connected cluster resource>"
-$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId"
-
-$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString()
-$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password
-$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId
-```
-
-For example:
-
-```powershell
-.\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId
-```
--
-### Using bash
-
-1. Download and save the script to a local folder that configures your cluster with the monitoring add-on using the following commands:
-
- ```bash
- curl -o disable-monitoring.sh -L https://aka.ms/disable-monitoring-bash-script
- ```
-
-2. Configure the `azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
-
- ```bash
- export AZUREARCCLUSTERRESOURCEID="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
-
-3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
-
- ```bash
- export KUBECONTEXT="<kubeContext name of your k8s cluster>"
- ```
-
-4. To stop monitoring your cluster, there are different commands provided based on your deployment scenario.
-
- Run the following command to stop monitoring the cluster using the current context.
-
- ```bash
- bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID
- ```
-
- Run the following command to stop monitoring the cluster by specifying a context
-
- ```bash
- bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT
- ```
-
-#### Using service principal
-The bash script *disable-monitoring.sh* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the --client-id, --client-secret, and --tenant-id values of the service principal you intend to use to the *disable-monitoring.sh* bash script.
-
-```bash
-SUBSCRIPTIONID="<subscription Id of the Azure Arc-connected cluster resource>"
-SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTIONID}")
-SERVICEPRINCIPALCLIENTID=$(echo $SERVICEPRINCIPAL | jq -r '.appId')
-
-SERVICEPRINCIPALCLIENTSECRET=$(echo $SERVICEPRINCIPAL | jq -r '.password')
-TENANTID=$(echo $SERVICEPRINCIPAL | jq -r '.tenant')
-```
-
-For example:
-
-```bash
-bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT --client-id $SERVICEPRINCIPALCLIENTID --client-secret $SERVICEPRINCIPALCLIENTSECRET --tenant-id $TENANTID
-```
-
-## Next steps
-
-If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
- Title: Disable Container insights on your Azure Kubernetes Service (AKS) cluster
-description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights.
- Previously updated : 08/21/2023----
-# Disable Container insights on your Azure Kubernetes Service (AKS) cluster
-
-After you enable monitoring of your Azure Kubernetes Service (AKS) cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows you how to do this task by using the Azure CLI or the provided Azure Resource Manager templates (ARM templates).
-
-## Azure CLI
-
-Use the [az aks disable-addons](/cli/azure/aks#az-aks-disable-addons) command to disable Container insights. The command removes the agent from the cluster nodes. It doesn't remove the solution or the data already collected and stored in your Azure Monitor resource.
-
-```azurecli
-az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG
-```
-
-To reenable monitoring for your cluster, see [Enable monitoring by using the Azure CLI](container-insights-enable-new-cluster.md#enable-using-azure-cli).
-
-## Azure Resource Manager template
-
-Two ARM templates are provided to support removing the solution resources consistently and repeatedly in your resource group. One is a JSON template that specifies the configuration to stop monitoring. The other template contains parameter values that you configure to specify the AKS cluster resource ID and resource group in which the cluster is deployed.
-
-If you're unfamiliar with the concept of deploying resources by using a template, see:
-
-* [Deploy resources with ARM templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy resources with ARM templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
-
->[!NOTE]
->The template must be deployed in the same resource group of the cluster. If you omit any other properties or add-ons when you use this template, they might be removed from the cluster. Examples are `enableRBAC` for Kubernetes RBAC policies implemented in your cluster, or `aksResourceTagValues`, if tags are specified for the AKS cluster.
->
-
-If you choose to use the Azure CLI, you must install and use the CLI locally. You must be running the Azure CLI version 2.0.27 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-### Create a template
-
-1. Copy and paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aksResourceId": {
- "type": "string",
- "metadata": {
- "description": "AKS Cluster Resource ID"
- }
- },
- "aksResourceLocation": {
- "type": "string",
- "metadata": {
- "description": "Location of the AKS resource e.g. \"East US\""
- }
- },
- "aksResourceTagValues": {
- "type": "object",
- "metadata": {
- "description": "Existing all tags on AKS Cluster Resource"
- }
- }
- },
- "resources": [
- {
- "name": "[split(parameters('aksResourceId'),'/')[8]]",
- "type": "Microsoft.ContainerService/managedClusters",
- "location": "[parameters('aksResourceLocation')]",
- "tags": "[parameters('aksResourceTagValues')]",
- "apiVersion": "2018-03-31",
- "properties": {
- "mode": "Incremental",
- "id": "[parameters('aksResourceId')]",
- "addonProfiles": {
- "omsagent": {
- "enabled": false,
- "config": null
- }
- }
- }
- }
- ]
- }
- ```
-
-1. Save this file as **OptOutTemplate.json** to a local folder.
-
-1. Paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "aksResourceId": {
- "value": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroup>/providers/Microsoft.ContainerService/managedClusters/<ResourceName>"
- },
- "aksResourceLocation": {
- "value": "<aksClusterRegion>"
- },
- "aksResourceTagValues": {
- "value": {
- "<existing-tag-name1>": "<existing-tag-value1>",
- "<existing-tag-name2>": "<existing-tag-value2>",
- "<existing-tag-nameN>": "<existing-tag-valueN>"
- }
- }
- }
- }
- ```
-
-1. Edit the values for **aksResourceId** and **aksResourceLocation** by using the values of the AKS cluster, which you can find on the **Properties** page for the selected cluster.
- <!-- convertborder later -->
- :::image type="content" source="media/container-insights-optout/container-properties-page.png" lightbox="media/container-insights-optout/container-properties-page.png" alt-text="Screenshot that shows the Container properties page." border="false":::
-
- While you're on the **Properties** page, also copy the **Workspace Resource ID**. This value is required if you decide you want to delete the Log Analytics workspace later. Deleting the Log Analytics workspace isn't performed as part of this process.
-
- Edit the values for **aksResourceTagValues** to match the existing tag values specified for the AKS cluster.
-
-1. Save this file as **OptOutParam.json** to a local folder.
-
-Now you're ready to deploy this template.
-
-### Remove the solution by using the Azure CLI
-
-To remove the solution and clean up the configuration on your AKS cluster, run the following command with the Azure CLI on Linux:
-
-```azurecli
-az login
-az account set --subscription "Subscription Name"
-az deployment group create --resource-group <ResourceGroupName> --template-file ./OptOutTemplate.json --parameters ./OptOutParam.json
-```
-
-The configuration change can take a few minutes to finish. The result is returned in a message similar to the following example:
-
-```output
-ProvisioningState : Succeeded
-```
-
-### Remove the solution by using PowerShell
--
-To remove the solution and clean up the configuration from your AKS cluster, run the following PowerShell commands in the folder that contains the template:
-
-```powershell
-Connect-AzAccount
-Select-AzSubscription -SubscriptionName <yourSubscriptionName>
-New-AzResourceGroupDeployment -Name opt-out -ResourceGroupName <ResourceGroupName> -TemplateFile .\OptOutTemplate.json -TemplateParameterFile .\OptOutParam.json
-```
-
-The configuration change can take a few minutes to finish. The result is returned in a message similar to the following example:
-
-```output
-ProvisioningState : Succeeded
-```
-
-## Next steps
-
-If the workspace was created only to support monitoring the cluster and it's no longer needed, you must delete it manually. If you aren't familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace with the Azure portal](../logs/delete-workspace.md). Don't forget about the **Workspace Resource ID** copied earlier in step 4. You'll need that information.
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights in Azure Monitor description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. - Previously updated : 08/14/2023+ Last updated : 12/20/2023
-# Container insights overview
+# Overview of Container insights in Azure Monitor
-Container insights is a feature of Azure Monitor that monitors the performance and health of container workloads deployed to [Azure](../../aks/intro-kubernetes.md) or that are managed by [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). It collects memory and processor metrics from controllers, nodes, and containers in addition to gathering container logs. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and pre-built [workbooks](container-insights-reports.md).
+Container insights is a feature of Azure Monitor that collects and analyzes container logs from [Azure Kubernetes clusters](../../aks/intro-kubernetes.md) or [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and prebuilt [workbooks](container-insights-reports.md).
-The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
+Container insights works with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) for complete monitoring of your Kubernetes environment. It identifies all clusters across your subscriptions and allows you to quickly enable monitoring by both services.
-> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-## Features of Container insights
-
-Container insights includes the following features to provide to understand the performance and health of your Kubernetes cluster and container workloads:
--- Identify resource bottlenecks by identifying containers running on each node and their processor and memory utilization.-- Identify processor and memory utilization of container groups and their containers hosted in container instances.-- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.-- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.-- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.-- Access live container logs and metrics generated by the container engine to help with troubleshooting issues in real time.-- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
+> [!IMPORTANT]
+> Container insights collects metric data from your cluster in addition to logs. This functionality has been replaced by [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). You can analyze that data using built-in dashboards in [Managed Grafana](../../managed-grafan).
+>
+> You can continue to have Container insights collect metric data so you can use the Container insights monitoring experience. Or you can save cost by disabling this collection and using Grafana for metric analysis. See [Configure data collection in Container insights using data collection rule](container-insights-data-collection-dcr.md) for configuration options.
## Access Container insights
-Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular cluster from its page in the Azure portal.
:::image type="content" source="media/container-insights-overview/azmon-containers-experience.png" lightbox="media/container-insights-overview/azmon-containers-experience.png" alt-text="Screenshot that shows an overview of methods to access Container insights." border="false"::: ## Data collected
-Container insights sends data to [Logs](../logs/data-platform-logs.md) and [Metrics](../essentials/data-platform-metrics.md) where you can analyze it using different features of Azure Monitor. It works with other Azure services such as [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and [Managed Grafana](../../managed-grafan#monitoring-data).
+Container insights sends data to a [Log Analytics workspace](../logs/data-platform-logs.md) where you can analyze it using different features of Azure Monitor. This workspace is different than the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) used by Managed Prometheus. For more information on these other services, see [Monitoring data](../../aks/monitor-aks.md#monitoring-data).
## Supported configurations
-Container insights supports the following configurations:
--- [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- [Azure Container Instances](../../container-instances/container-instances-overview.md).-- Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises.-- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).-- AKS for ARM64 nodes.-
-Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
+Container insights supports the following environments:
+
+- [Azure Kubernetes Service (AKS)](../../aks/index.yml)
+- Following [Azure Arc-enabled Kubernetes cluster distributions](../../azure-arc/kubernetes/validation-program.md):
+ - AKS on Azure Stack HCI
+ - AKS Edge Essentials
+ - Canonical
+ - Cluster API Provider on Azure
+ - K8s on Azure Stack Edge
+ - Red Hat OpenShift version 4.x
+ - SUSE Rancher (Rancher Kubernetes engine)
+ - SUSE Rancher K3s
+ - VMware (TKG)
+
+> [!NOTE]
+> Container insights supports ARM64 nodes on AKS. See [Cluster requirements](../../azure-arc/kubernetes/system-requirements.md#cluster-requirements) for the details of Azure Arc-enabled clusters that support ARM64 nodes.
>[!NOTE] > Container insights support for Windows Server 2022 operating system is in public preview. +
+## Agent
+
+Container insights and Managed Prometheus rely on a containerized [Azure Monitor agent](../agents/agents-overview.md) for Linux. This specialized agent collects performance and event data from all nodes in the cluster. The agent is deployed and registered with the specified workspaces during deployment. When you enable Container insights on a cluster, a [Data collection rule (DCR)](../essentials/data-collection-rule-overview.md) is created with the name `MSCI-<cluster-region>-<cluster-name>` that contains the definition of data that should be collected by Azure Monitor agent.
+
+Since March 1, 2023, Container insights has used a semver-compliant agent version. The agent version is *mcr.microsoft.com/azuremonitor/containerinsights/ciprod:3.1.4* or later. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/Docker-Provider/blob/ci_prod/ReleaseNotes.md).
++
+### Log Analytics agent
+
+When Container insights doesn't use managed identity authentication, it relies on a containerized [Log Analytics agent for Linux](../agents/log-analytics-agent.md). The agent version is *microsoft/oms:ciprod04202018* or later. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+
+With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemon set pod on each individual Windows Server node to collect logs and forward them to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor for all Windows nodes in the cluster.
++ ## Frequently asked questions This section provides answers to common questions.
-### Is there support for collecting Kubernetes audit logs for ARO clusters?
-
+**Is there support for collecting Kubernetes audit logs for ARO clusters?**
No. Container insights doesn't support collection of Kubernetes audit logs.
-### Does Container Insights support pod sandboxing?
-
-Yes, Container Insights supports pod sandboxing through support for Kata Containers. For more details on pod sandboxing in AKS, [refer to the AKS docs](/azure/aks/use-pod-sandboxing).
+**Does Container Insights support pod sandboxing?**
+Yes, Container Insights supports pod sandboxing through support for Kata Containers. See [Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)](../../aks/use-pod-sandboxing.md).
## Next steps
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Container insights automatically starts monitoring PV usage by collecting the fo
|--|--|-| | `pvUsedBytes`| `podUID`, `podName`, `pvcName`, `pvcNamespace`, `capacityBytes`, `clusterId`, `clusterName`| Used space in bytes for a specific persistent volume with a claim used by a specific pod. The `capacityBytes` tag is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
-To learn more about how to configure collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-agent-config.md).
+To learn more about how to configure collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-data-collection-configmap.md).
## PV inventory
You can enable a recommended alert to alert you when average PV usage for a pod
## Next steps
-To learn more about collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-agent-config.md).
+To learn more about collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-data-collection-configmap.md).
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
The following table summarizes known errors you might encounter when you use Con
| Error messages | Action | | - | | | Error message "No data for selected filters" | It might take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check if the Log Analytics workspace is configured for `disableLocalAuth = true`. If yes, update back to `disableLocalAuth = false`.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` |
-| Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
+| Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](kubernetes-monitoring-disable.md) monitoring for the cluster and [enable](kubernetes-monitoring-enable.md) Container insights again. |
| "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Legacy solutions** from the pane on the left side. To resolve this issue, redeploy the solution. Follow the instructions in [Enable Container insights](container-insights-onboard.md). | | Error message "Missing Subscription registration" | If you receive the error "Missing Subscription registration for Microsoft.OperationsManagement," you can resolve it by registering the resource provider **Microsoft.OperationsManagement** in the subscription where the workspace is defined. For the steps, see [Resolve errors for resource provider registration](../../azure-resource-manager/templates/error-register-resource-provider.md). | | Error message "The reply url specified in the request doesn't match the reply urls configured for the application: '<application ID\>'." | You might see this error message when you enable live logs. For the solution, see [View container data in real time with Container insights](./container-insights-livedata-setup.md#configure-azure-ad-integrated-authentication). |
ContainerLog
Reenable collection for these properties for every container log line.
-If the first option isn't convenient because of query changes involved, you can reenable collecting these fields. Enable the setting `log_collection_settings.enrich_container_logs` in the agent config map as described in the [data collection configuration settings](./container-insights-agent-config.md).
+If the first option isn't convenient because of query changes involved, you can reenable collecting these fields. Enable the setting `log_collection_settings.enrich_container_logs` in the agent config map as described in the [data collection configuration settings](./container-insights-data-collection-configmap.md).
> [!NOTE] > We don't recommend the second option for large clusters that have more than 50 nodes. It generates API server calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line collected.
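A minimal sketch of enabling this setting is shown below. It assumes the sample ConfigMap published in the Docker-Provider repository; verify the URL and review the other settings in the file before applying it.

```bash
# Download the sample agent ConfigMap (assumed sample location)
curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml

# Edit the file and set enabled = true under [log_collection_settings.enrich_container_logs],
# then apply the ConfigMap to the cluster
kubectl apply -f container-azm-ms-agentconfig.yaml
```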
If the first option isn't convenient because of query changes involved, you can
Here's the scenario: You enabled Container insights for an Azure Kubernetes Service cluster. Then you deleted the Log Analytics workspace where the cluster was sending its data. Now when you attempt to upgrade the cluster, it fails. To work around this issue, you must disable monitoring and then reenable it by referencing a different valid workspace in your subscription. When you try to perform the cluster upgrade again, it should process and complete successfully. +
+## Not collecting logs on Azure Stack HCI cluster
+If you registered your cluster and/or configured HCI Insights before November 2023, features that use the Azure Monitor agent on HCI, such as Arc for Servers Insights, VM Insights, Container Insights, Defender for Cloud, or Microsoft Sentinel might not be collecting logs and event data properly. See [Repair AMA agent for HCI](/azure-stack/hci/manage/monitor-hci-single?tabs=22h2-and-later#repair-ama-for-azure-stack-hci) for steps to reconfigure the agent and HCI Insights.
+
+
## Next steps

When monitoring is enabled to capture health metrics for the AKS cluster nodes and pods, these health metrics are available in the Azure portal. To learn how to use Container insights, see [View Azure Kubernetes Service health](container-insights-analyze.md).
azure-monitor Container Insights V2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-v2-migration.md
- Title: Migrate from ContainerLog to ContainerLogV2
-description: This article describes the transition plan from the ContainerLog to ContainerLogV2 table
- Previously updated : 07/19/2023---
-# Migrate from ContainerLog to ContainerLogV2
-
-With the upgraded offering of ContainerLogV2 becoming generally available, on 30th September 2026, the ContainerLog table will be retired. If you currently ingest container insights data to the ContainerLog table, transition to using ContainerLogV2 prior to that date.
-
->[!NOTE]
-> Support for ingesting the ContainerLog table will be **retired on 30th September 2026**.
-
-## Steps to complete the transition
-
-To transition to ContainerLogV2, we recommend the following approach.
-
-1. Learn about the feature differences between ContainerLog and ContainerLogV2
-2. Assess the impact migrating to ContainerLogV2 might have on your existing queries, alerts, or dashboards
-3. [Enable the ContainerLogV2 schema](container-insights-logging-v2.md) through either the container insights data collection rules (DCRs) or ConfigMap
-4. Validate that you're now ingesting ContainerLogV2 to your Log Analytics workspace.
-
-## ContainerLog vs ContainerLogV2 schema
-
-The following table highlights the key differences between using ContainerLog and ContainerLogV2 schema.
-
->[!NOTE]
-> DCR based configuration is not supported for service principal based clusters. [Migrate your clusters with service principal to managed identity](./container-insights-authentication.md) to use this experience.
-
-| Feature differences | ContainerLog | ContainerLogV2 |
-| - | -- | - |
-| Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR\* |
-| Pricing | Only compatible with full-priced analytics logs | Supports the low cost basic logs tier in addition to analytics logs |
-| Querying | Requires multiple join operations with inventory tables for standard queries | Includes additional pod and container metadata to reduce query complexity and join operations |
-| Multiline | Not supported, multiline entries are split into multiple rows | Support for multiline logging to allow consolidated, single entries for multiline output |
-
-\* DCR enablement is not supported for service principal based clusters, must be enabled through the ConfigMap
-
-## Assess the impact on existing alerts
-
-If you're currently using ContainerLog in your alerts, then migrating to ContainerLogV2 requires updates to your alert queries for them to continue functioning as expected.
-
-To scan for alerts that might be referencing the ContainerLog table, run the following Azure Resource Graph query:
-
-```Kusto
-resources
-| where type in~ ('microsoft.insights/scheduledqueryrules') and ['kind'] !in~ ('LogToMetric')
-| extend severity = strcat("Sev", properties["severity"])
-| extend enabled = tobool(properties["enabled"])
-| where enabled in~ ('true')
-| where tolower(properties["targetResourceTypes"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["targetResourceType"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["scopes"]) matches regex 'providers/microsoft.operationalinsights/workspaces($|/.*)?'
-| where properties contains "ContainerLog"
-| project id,name,type,properties,enabled,severity,subscriptionId
-| order by tolower(name) asc
-```
-
-## Next steps
-- [Enable ContainerLogV2](container-insights-logging-v2.md)
azure-monitor Kubernetes Monitoring Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-disable.md
+
+ Title: Disable monitoring of your Kubernetes cluster
+description: Describes how to remove Container insights and scraping of Prometheus metrics from your Kubernetes cluster.
+ Last updated : 12/14/2023+
+ms.devlang: azurecli
+++
+# Disable monitoring of your Kubernetes cluster
+
+This article shows you how to stop monitoring your Kubernetes cluster and remove Container insights.
++
+## AKS cluster
+
+### [CLI](#tab/cli)
+
+#### Prometheus
+Currently, the Azure CLI is the only option to remove the metrics add-on from your AKS cluster and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+
+The `az aks update --disable-azure-monitor-metrics` command:
+
++ Removes the ama-metrics agent from the cluster nodes.
++ Deletes the recording rules created for that cluster.
++ Deletes the data collection endpoint (DCE).
++ Deletes the data collection rule (DCR).
++ Deletes the DCRA and recording rules groups created as part of onboarding.
+
+> [!NOTE]
+> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+#### Container insights
+Use the [az aks disable-addons](/cli/azure/aks#az-aks-disable-addons) command to disable Container insights. The command removes the agent from the cluster nodes. It doesn't remove the solution or the data already collected and stored in your Azure Monitor resource.
+
+```azurecli
+az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG
+```
+
+### [Azure Resource Manager](#tab/arm)
++
+#### Download and install template
+
+1. Create the template file by saving the following JSON syntax as *OptOutTemplate.json*.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "aksResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "AKS Cluster Resource ID"
+ }
+ },
+ "aksResourceLocation": {
+ "type": "string",
+ "metadata": {
+ "description": "Location of the AKS resource e.g. \"East US\""
+ }
+ },
+ "aksResourceTagValues": {
+ "type": "object",
+ "metadata": {
+ "description": "Existing all tags on AKS Cluster Resource"
+ }
+ }
+ },
+ "resources": [
+ {
+ "name": "[split(parameters('aksResourceId'),'/')[8]]",
+ "type": "Microsoft.ContainerService/managedClusters",
+ "location": "[parameters('aksResourceLocation')]",
+ "tags": "[parameters('aksResourceTagValues')]",
+ "apiVersion": "2018-03-31",
+ "properties": {
+ "mode": "Incremental",
+ "id": "[parameters('aksResourceId')]",
+ "addonProfiles": {
+ "omsagent": {
+ "enabled": false,
+ "config": null
+ }
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+2. Create the parameter file by saving the following JSON syntax as *OptOutParam.json*.
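+
+    A minimal sketch of the parameter file is shown below; the placeholder values are assumptions that you replace with your own values in the next step.
+
+    ```json
+    {
+      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+      "contentVersion": "1.0.0.0",
+      "parameters": {
+        "aksResourceId": {
+          "value": "/subscriptions/<SubscriptionId>/resourcegroups/<ResourceGroup>/providers/Microsoft.ContainerService/managedClusters/<ClusterName>"
+        },
+        "aksResourceLocation": {
+          "value": "<AksClusterLocation>"
+        },
+        "aksResourceTagValues": {
+          "value": {
+            "<existing-tag-name1>": "<existing-tag-value1>",
+            "<existing-tag-name2>": "<existing-tag-value2>"
+          }
+        }
+      }
+    }
+    ```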
+
+3. Edit the following values in the parameter file. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
+
+ | Parameter | Description |
+ |:|:|
+ | `aksResourceId` | Resource ID of the cluster. |
+ | `aksResourceLocation` | Location of the cluster. |
+    | `aksResourceTagValues` | Existing tag values specified for the Container insights cluster. |
++
+4. Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
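+
+    For example, a minimal Azure CLI deployment might look like the following sketch, run against the cluster's resource group:
+
+    ```azurecli
+    az deployment group create --resource-group <cluster-resource-group> --template-file ./OptOutTemplate.json --parameters @OptOutParam.json
+    ```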
+++
+## Arc-enabled Kubernetes cluster
+
+The following scripts are available for removing Container insights from your Arc-enabled Kubernetes clusters. You can get the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, then don't specify this parameter.
++
+PowerShell: [disable-monitoring.ps1](https://aka.ms/disable-monitoring-powershell-script)
+Bash: [disable-monitoring.sh](https://aka.ms/disable-monitoring-bash-script)
++
+```powershell
+# Use current context
+.\disable-monitoring.ps1 -clusterResourceId <cluster-resource-id>
+
+# Specify kube-context
+.\disable-monitoring.ps1 -clusterResourceId <cluster-resource-id> -kubeContext <kube-context>
+```
+
+```bash
+# Use current context
+bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID
+
+# Specify kube-context
+bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT
+```
+
+## Disable monitoring using Helm
+
+The following steps apply to the following environments:
+
+- AKS Engine on Azure and Azure Stack
+- OpenShift version 4 and higher
+
+1. Run the following helm command to identify the Container insights helm chart release installed on your cluster:
+
+ ```
+ helm list
+ ```
+
+ The output resembles the following:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
+ ```
+
+ *azmon-containers-release-1* represents the helm chart release for Container insights.
+
+2. To delete the chart release, run the following helm command.
+
+ `helm delete <releaseName>`
+
+ Example:
+
+ `helm delete azmon-containers-release-1`
+
+ This removes the release from the cluster. You can verify by running the `helm list` command:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ ```
+
+The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`.
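+
+For example, assuming the release name from the earlier output, you can inspect the history and restore a revision as follows. Whether a deleted release can be rolled back depends on your Helm version and whether release history was kept.
+
+```bash
+# Review the revision history for the release
+helm history azmon-containers-release-1
+
+# Roll back to a specific revision, for example revision 3
+helm rollback azmon-containers-release-1 3
+```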
+
+## Next steps
+
+If the workspace was created only to support monitoring the cluster and it's no longer needed, you must delete it manually. If you aren't familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace with the Azure portal](../logs/delete-workspace.md). Don't forget about the **Workspace Resource ID** copied earlier in step 4. You'll need that information.
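+
+If you prefer the CLI over the portal, the following sketch shows the equivalent command; double-check the workspace name and resource group before running it.
+
+```azurecli
+# Delete the Log Analytics workspace that was used only for monitoring this cluster
+az monitor log-analytics workspace delete --resource-group <workspace-resource-group> --workspace-name <workspace-name>
+```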
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
+
+ Title: Enable monitoring for Azure Kubernetes Service (AKS) cluster
+description: Learn how to enable Container insights and Managed Prometheus on an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 11/14/2023++++
+# Enable monitoring for Kubernetes clusters
+
+This article describes how to enable complete monitoring of your Kubernetes clusters using the following Azure Monitor features:
+
+- [Managed Prometheus](../essentials/prometheus-metrics-overview.md) for metric collection
+- [Container insights](./container-insights-overview.md) for log collection
+- [Managed Grafana](../../managed-grafan) for visualization.
+
+[Using the Azure portal](#enable-full-monitoring-with-azure-portal), you can enable all of the features at the same time. You can also enable them individually by using the Azure CLI, Azure Resource Manager template, Terraform, or Azure Policy. Each of these methods is described in this article.
+
+> [!IMPORTANT]
+> This article describes onboarding using default configuration settings including managed identity authentication. See [Configure agent data collection for Container insights](container-insights-data-collection-configmap.md) and [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](prometheus-metrics-scrape-configuration.md) to customize your configuration to ensure that you aren't collecting more data than you require. See [Authentication for Container Insights](container-insights-authentication.md) for guidance on migrating from legacy authentication models.
+
+## Supported clusters
+
+This article provides onboarding guidance for the following types of clusters. Any differences in the process for each type are noted in the relevant sections.
+
+- [Azure Kubernetes clusters (AKS)](../../aks/intro-kubernetes.md)
+- [Arc-enabled Kubernetes clusters](../../azure-arc/kubernetes/overview.md)
+- [AKS hybrid clusters (preview)](/azure/aks/hybrid/aks-hybrid-options-overview)
+
+## Prerequisites
+
+**Permissions**
+
+- You require at least [Contributor](../../role-based-access-control/built-in-roles.md#contributor) access to the cluster for onboarding.
+- You require [Monitoring Reader](../roles-permissions-security.md#monitoring-reader) or [Monitoring Contributor](../roles-permissions-security.md#monitoring-contributor) to view data after monitoring is enabled.
+
+**Managed Prometheus prerequisites**
+
+ - The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).
+ - The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace:
+ - Microsoft.ContainerService
+ - Microsoft.Insights
+ - Microsoft.AlertsManagement
+
+**Arc-Enabled Kubernetes clusters prerequisites**
+
+ - Prerequisites for [Azure Arc-enabled Kubernetes cluster extensions](../../azure-arc/kubernetes/extensions.md#prerequisites).
+ - Verify the [firewall requirements](kubernetes-monitoring-firewall.md) in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md).
+ - If you previously installed monitoring for AKS, ensure that you have [disabled monitoring](kubernetes-monitoring-disable.md) before proceeding to avoid issues during the extension install.
+ - If you previously installed monitoring on a cluster using a script without cluster extensions, follow the instructions at [Disable Container insights on your hybrid Kubernetes cluster](container-insights-optout-hybrid.md) to delete this Helm chart.
+++
+## Workspaces
+
+The following table describes the workspaces that are required to support Managed Prometheus and Container insights. You can create each workspace as part of the onboarding process or use an existing workspace. See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) for guidance on how many workspaces to create and where they should be placed.
+
+| Feature | Workspace | Notes |
+|:|:|:|
+| Managed Prometheus | [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) | `Contributor` permission is enough for enabling the add-on to send data to the Azure Monitor workspace. You need `Owner` level permission to link your Azure Monitor workspace to Azure Managed Grafana so that metrics can be viewed there. This is required because the user executing the onboarding step needs to be able to give the Azure Managed Grafana system identity the `Monitoring Reader` role on the Azure Monitor workspace to query the metrics. |
+| Container insights | [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) | You can attach an AKS cluster to a Log Analytics workspace in a different Azure subscription in the same Microsoft Entra tenant, but you must use the Azure CLI or an Azure Resource Manager template. You can't currently perform this configuration with the Azure portal.<br><br>If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the *Microsoft.ContainerService* resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).<br><br>For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md). |
+| Managed Grafana | [Azure Managed Grafana workspace](../../managed-grafan) | Link your Grafana workspace to your Azure Monitor workspace to make the Prometheus metrics collected from your cluster available to Grafana dashboards. |
++
+## Enable Prometheus and Grafana
+Use one of the following methods to enable scraping of Prometheus metrics from your cluster and enable Managed Grafana to visualize the metrics. See [Link a Grafana workspace](../../managed-grafan) for options to connect your Azure Monitor workspace and Azure Managed Grafana workspace.
+
+### [CLI](#tab/cli)
+
+If you don't specify an existing Azure Monitor workspace in the following commands, the default workspace for the resource group will be used. If a default workspace doesn't already exist in the cluster's region, one with a name in the format `DefaultAzureMonitorWorkspace-<mapped_region>` will be created in a resource group with the name `DefaultRG-<cluster_region>`.
+
+#### Prerequisites
+
+- Azure CLI version 2.49.0 or higher is required.
+- The aks-preview extension must be [uninstalled from AKS clusters](/cli/azure/azure-cli-extensions-overview) by using the command `az extension remove --name aks-preview`.
+- The k8s-extension extension must be installed using the command `az extension add --name k8s-extension`.
+- The k8s-extension version 1.4.1 or higher is required.
+
+#### AKS cluster
+Use the `--enable-azure-monitor-metrics` option with `az aks create` or `az aks update` (depending on whether you're creating a new cluster or updating an existing cluster) to install the metrics add-on that scrapes Prometheus metrics.
++
+**Sample commands**
+
+```azurecli
+### Use default Azure Monitor workspace
+az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+
+### Use existing Azure Monitor workspace
+az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+
+### Use an existing Azure Monitor workspace and link with an existing Grafana workspace
+az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+
+### Use optional parameters
+az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
+```
+
+#### Arc-enabled cluster
++
+```azurecli
+### Use default Azure Monitor workspace
+az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics
+
+### Use existing Azure Monitor workspace
+az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id>
+
+### Use an existing Azure Monitor workspace and link with an existing Grafana workspace
+az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> grafana-resource-id=<grafana-workspace-name-resource-id>
+
+### Use optional parameters
+az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> grafana-resource-id=<grafana-workspace-name-resource-id> AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList="pods=[k8s-annotation-1,k8s-annotation-n]" AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist="namespaces=[k8s-label-1,k8s-label-n]"
+```
+
+Any of the commands can use the following optional parameters:
+
+- AKS: `--ksm-metric-annotations-allow-list`<br>Arc: `--AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList`<br>Comma-separated list of Kubernetes annotation keys used in the resource's kube_resource_annotations metric. For example, kube_pod_annotations is the annotations metric for the pods resource. By default, this metric contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and Kubernetes annotation keys that you want to allow for them. A single `*` can be provided for each resource to allow any annotations, but this has severe performance implications. For example, `pods=[kubernetes.io/team,...],namespaces=[kubernetes.io/team],...`.<br>
+- AKS: `--ksm-metric-labels-allow-list`<br>Arc: `--AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist`<br>Comma-separated list of more Kubernetes label keys that are used in the resource's kube_resource_labels metric. For example, kube_pod_labels is the labels metric for the pods resource. By default, this metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and Kubernetes label keys that you want to allow for them. A single `*` can be provided for each resource to allow any labels, but this has severe performance implications. For example, `pods=[app],namespaces=[k8s-label-1,k8s-label-n,...],...`.<br>
+- AKS: `--enable-windows-recording-rules`<br>Lets you enable the recording rule groups required for proper functioning of the Windows dashboards.
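+
+For example, on an AKS cluster the optional parameters might be combined as follows (a sketch; substitute your own label and annotation keys):
+
+```azurecli
+az aks update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> \
+  --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" \
+  --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]" \
+  --enable-windows-recording-rules
+```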
+++
+### [Azure Resource Manager](#tab/arm)
+
+Both ARM and Bicep templates are provided in this section.
+
+#### Prerequisites
+
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+- The template must be deployed in the same resource group as the Azure Managed Grafana instance.
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider using the guidance at [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Reader` role directly by deploying the template.
+
+> [!NOTE]
+> Currently in Bicep, there's no way to explicitly scope the `Monitoring Reader` role assignment on a string parameter "resource ID" for an Azure Monitor workspace like in an ARM template. Bicep expects a value of type `resource | tenant`. There is also no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
+>
+> Therefore, the default scoping for the `Monitoring Reader` role is on the resource group. The role is applied on the same Azure Monitor workspace by inheritance, which is the expected behavior. After you deploy this Bicep template, the Grafana instance is given `Monitoring Reader` permissions for all the Azure Monitor workspaces in that resource group.
++
+#### Retrieve required values for Grafana resource
+If the Azure Managed Grafana instance is already linked to an Azure Monitor workspace, then you must include this list in the template. On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**, and copy the value of `azureMonitorWorkspaceIntegrations` which will look similar to the sample below. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+#### Download and edit template and parameter file
+
+1. Download the required files for the type of Kubernetes cluster you're working with.
+
+ **AKS cluster ARM**
+
+ - Template file: [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template)
+ - Parameter file: [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters)
+
+ **AKS cluster Bicep**
+
+ - Template file: [https://aka.ms/azureprometheus-enable-bicep-template](https://aka.ms/azureprometheus-enable-bicep-template)
+ - Parameter file: [https://aka.ms/azureprometheus-enable-bicep-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters)
+ - DCRA module: [https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId)
+ - Profile module: [https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId)
+
+ **Arc-Enabled cluster ARM**
+
+ - Template file: [https://aka.ms/azureprometheus-arc-arm-template](https://aka.ms/azureprometheus-arc-arm-template)
+ - Parameter file: [https://aka.ms/azureprometheus-arc-arm-template-parameters](https://aka.ms/azureprometheus-arc-arm-template-parameters)
+++
+2. Edit the following values in the parameter file. The same set of values are used for both the ARM and Bicep templates. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
++
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys to be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys to be used in the resource's annotations metric. |
+ | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
+++
+3. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will look similar to the following samples. In these samples, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
+
+ **ARM**
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ```
++
+ **Bicep**
+
+
+
+ ```bicep
+ resource grafanaResourceId_8 'Microsoft.Dashboard/grafana@2022-08-01' = {
+ name: split(grafanaResourceId, '/')[8]
+ sku: {
+ name: grafanaSku
+ }
+ identity: {
+ type: 'SystemAssigned'
+ }
+ location: grafanaLocation
+ properties: {
+ grafanaIntegrations: {
+ azureMonitorWorkspaceIntegrations: [
+ {
+ azureMonitorWorkspaceResourceId: 'full_resource_id_1'
+ }
+ {
+ azureMonitorWorkspaceResourceId: 'full_resource_id_2'
+ }
+ {
+ azureMonitorWorkspaceResourceId: azureMonitorWorkspaceResourceId
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+4. Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
++
+### [Terraform](#tab/terraform)
+
+#### Prerequisites
+
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
+- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template.
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+
+#### Retrieve required values for a Grafana resource
+
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the `azure_monitor_workspace_integrations` block in `main.tf` with the list of grafana integrations.
+
+```.tf
+  azure_monitor_workspace_integrations {
+    resource_id = var.monitor_workspace_id1
+  }
+
+  azure_monitor_workspace_integrations {
+    resource_id = var.monitor_workspace_id2
+  }
+```
+
+#### Download and edit the templates
+
+If you're deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow these steps:
+
+1. Download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357).
+2. Edit the variables in variables.tf file with the correct parameter values.
+3. Run `terraform init -upgrade` to initialize the Terraform deployment.
+4. Run `terraform plan -out main.tfplan` to create a Terraform execution plan.
+5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
++
+Note: Pass the variables for `annotations_allowed` and `labels_allowed` keys in main.tf only when those values exist. These are optional blocks.
+
+> [!NOTE]
+> Edit the main.tf file appropriately before running the terraform template. Add any existing azure_monitor_workspace_integrations values to the grafana resource before running the template; otherwise, those older values are deleted and replaced with what's in the template during deployment. Users with the 'User Access Administrator' role in the subscription of the AKS cluster can enable the 'Monitoring Reader' role directly by deploying the template. Edit the grafanaSku parameter if you're using a nonstandard SKU, and run this template in the Grafana resource's resource group.
+
+### [Azure Policy](#tab/policy)
+
+1. Download Azure Policy template and parameter files.
+
+ - Template file: [https://aka.ms/AddonPolicyMetricsProfile](https://aka.ms/AddonPolicyMetricsProfile)
+ - Parameter file: [https://aka.ms/AddonPolicyMetricsProfile.parameters](https://aka.ms/AddonPolicyMetricsProfile.parameters)
+
+1. Create the policy definition using the following CLI command:
+
+ `az policy definition create --name "Prometheus Metrics addon" --display-name "Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules AddonPolicyMetricsProfile.rules.json --params AddonPolicyMetricsProfile.parameters.json`
+
+1. After you create the policy definition, in the Azure portal, select **Policy** and then **Definitions**. Select the policy definition you created.
+1. Select **Assign** and fill in the details on the **Parameters** tab. Select **Review + Create**.
+1. If you want to apply the policy to an existing cluster, create a **Remediation task** for that cluster resource from **Policy Assignment**.
+
+After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy runs and deploys the metrics add-on to enable Prometheus monitoring.
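+
+If you prefer to assign the policy from the CLI instead of the portal, the following sketch uses the same pattern as the Container insights policy assignment later in this article; supply the parameter values that the definition expects when you assign it.
+
+```azurecli
+az policy assignment create --name "prometheus-metrics-addon" --policy "Prometheus Metrics addon" --scope "/subscriptions/<subscriptionId>" --location <location> --assign-identity --identity-scope "/subscriptions/<subscriptionId>" --role Contributor
+```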
+++++
+## Enable Container insights
+Use one of the following methods to enable Container insights on your cluster. Once this is complete, see [Configure agent data collection for Container insights](container-insights-data-collection-configmap.md) to customize your configuration to ensure that you aren't collecting more data than you require.
++
+### [CLI](#tab/cli)
+
+Use one of the following commands to enable monitoring of your AKS and Arc-enabled clusters. If you don't specify an existing Log Analytics workspace, the default workspace for the resource group will be used. If a default workspace doesn't already exist in the cluster's region, one will be created with a name in the format `DefaultWorkspace-<GUID>-<Region>`.
+
+#### Prerequisites
+
+- Azure CLI version 2.43.0 or higher
+- Managed identity authentication is the default in CLI version 2.49.0 or higher.
+- Azure k8s-extension version 1.3.7 or higher
+- Managed identity authentication is the default in k8s-extension version 1.43.0 or higher.
+- Managed identity authentication is not supported for Arc-enabled Kubernetes clusters with ARO (Azure Red Hat Openshift) or Windows nodes. Use legacy authentication.
+- For CLI version 2.54.0 or higher, the logging schema will be configured to [ContainerLogV2](container-insights-logs-schema.md) using [ConfigMap](container-insights-data-collection-configmap.md).
+
+#### AKS cluster
+
+```azurecli
+### Use default Log Analytics workspace
+az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name>
+
+### Use existing Log Analytics workspace
+az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id>
+```
+
+**Example**
+
+```azurecli
+az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
+```
++
+#### Arc-enabled cluster
+
+```azurecli
+### Use default Log Analytics workspace
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
+
+### Use existing Log Analytics workspace
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
+
+### Use managed identity authentication (default as of k8s-extension version 1.43.0)
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
+
+### Use advanced configuration settings
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.resources.daemonset.limits.cpu=150m amalogs.resources.daemonset.limits.memory=600Mi amalogs.resources.deployment.limits.cpu=1 amalogs.resources.deployment.limits.memory=750Mi
+
+### On Azure Stack Edge
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=/home/data/docker
+
+```
++
+**Example**
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --cluster-name my-cluster --resource-group my-resource-group --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID="/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
+```
++
+See the [resource requests and limits section of Helm chart](https://github.com/microsoft/Docker-Provider/blob/ci_prod/charts/azuremonitor-containers/values.yaml) for the available configuration settings.
+
+If the cluster is configured with a forward proxy, then proxy settings are automatically applied to the extension. For a cluster that uses Azure Monitor Private Link Scope (AMPLS) plus a proxy, the proxy configuration should be ignored. In that case, onboard the extension with the configuration setting `amalogs.ignoreExtensionProxySettings=true`.
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.ignoreExtensionProxySettings=true
+```
+
+**Delete extension instance**
+
+The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data in the Log Analytics resource is left intact.
+
+```azurecli
+az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group>
+```
+
+#### AKS hybrid cluster
++
+```azurecli
+### Use default Log Analytics workspace
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
+
+### Use existing Log Analytics workspace
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
+
+```
+
+See the [resource requests and limits section of Helm chart](https://github.com/microsoft/Docker-Provider/blob/ci_prod/charts/azuremonitor-containers/values.yaml) for the available configuration settings.
+
+**Example**
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --cluster-name my-cluster --resource-group my-resource-group --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true --configuration-settings logAnalyticsWorkspaceResourceID="/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
+```
+
+**Delete extension instance**
+
+The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data in the Log Analytics resource is left intact.
+
+```azurecli
+az k8s-extension delete --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --name azuremonitor-containers --yes
+```
+
+### [Azure Resource Manager](#tab/arm)
+
+Both ARM and Bicep templates are provided in this section.
+
+#### Prerequisites
+
+- The template must be deployed in the same resource group as the cluster.
+
+#### Download and install template
+
+1. Download and edit template and parameter file
+
+ **AKS cluster ARM**
+ - Template file: [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file)
+ - Parameter file: [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file)
+
+ **AKS cluster Bicep**
+ - Template file (Syslog): [https://aka.ms/enable-monitoring-msi-syslog-bicep-template](https://aka.ms/enable-monitoring-msi-syslog-bicep-template)
+    - Parameter file (Syslog): [https://aka.ms/enable-monitoring-msi-syslog-bicep-parameters](https://aka.ms/enable-monitoring-msi-syslog-bicep-parameters)
+ - Template file (No Syslog): [https://aka.ms/enable-monitoring-msi-bicep-template](https://aka.ms/enable-monitoring-msi-bicep-template)
+ - Parameter file (No Syslog): [https://aka.ms/enable-monitoring-msi-bicep-parameters](https://aka.ms/enable-monitoring-msi-bicep-parameters)
+
+ **Arc-enabled cluster ARM**
+ - Template file: [https://aka.ms/arc-k8s-azmon-extension-msi-arm-template](https://aka.ms/arc-k8s-azmon-extension-msi-arm-template)
+ - Parameter file: [https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params](https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params)
+ - Template file (legacy authentication): [https://aka.ms/arc-k8s-azmon-extension-arm-template](https://aka.ms/arc-k8s-azmon-extension-arm-template)
+ - Parameter file (legacy authentication): [https://aka.ms/arc-k8s-azmon-extension-arm-template-params](https://aka.ms/arc-k8s-azmon-extension-arm-template-params)
+
+2. Edit the following values in the parameter file. The same set of values are used for both the ARM and Bicep templates. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
+
+ | Parameter | Description |
+ |:|:|
+ | AKS: `aksResourceId`<br>Arc: `clusterResourceId` | Resource ID of the cluster. |
+ | AKS: `aksResourceLocation`<br>Arc: `clusterRegion` | Location of the cluster. |
+ | AKS: `workspaceResourceId`<br>Arc: `workspaceResourceId` | Resource ID of the Log Analytics workspace. |
+ | Arc: `workspaceRegion` | Region of the Log Analytics workspace. |
+ | Arc: `workspaceDomain` | Domain of the Log Analytics workspace.<br>`opinsights.azure.com` for Azure public cloud<br>`opinsights.azure.us` for AzureUSGovernment. |
+    | AKS: `resourceTagValues` | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the AKS cluster's resource group. For first time onboarding, you can set arbitrary tag values. |
++
+3. Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
+++
+### [Terraform](#tab/terraform)
+
+#### New AKS cluster
+
+1. Download Terraform template file depending on whether you want to enable Syslog collection.
+
+ **Syslog**
+ - [https://aka.ms/enable-monitoring-msi-syslog-terraform](https://aka.ms/enable-monitoring-msi-syslog-terraform)
+
+ **No Syslog**
+ - [https://aka.ms/enable-monitoring-msi-terraform](https://aka.ms/enable-monitoring-msi-terraform)
+
+2. Adjust the `azurerm_kubernetes_cluster` resource in *main.tf* based on your cluster settings.
+3. Update parameters in *variables.tf* to replace values in "<>"
+
+ | Parameter | Description |
+ |:|:|
+ | `aks_resource_group_name` | Use the values on the AKS Overview page for the resource group. |
+ | `resource_group_location` | Use the values on the AKS Overview page for the resource group. |
+ | `cluster_name` | Define the cluster name that you would like to create. |
+ | `workspace_resource_id` | Use the resource ID of your Log Analytics workspace. |
+ | `workspace_region` | Use the location of your Log Analytics workspace. |
+    | `resource_tag_values` | Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the same resource group as the AKS cluster. For first time onboarding, you can set arbitrary tag values. |
+ | `enabledContainerLogV2` | Set this parameter value to be true to use the default recommended ContainerLogV2. |
+ | Cost optimization parameters | Refer to [Data collection parameters](container-insights-cost-config.md#data-collection-parameters) |
++
+4. Run `terraform init -upgrade` to initialize the Terraform deployment.
+5. Run `terraform plan -out main.tfplan` to create a Terraform execution plan.
+6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
++
+#### Existing AKS cluster
+1. Import the existing cluster resource first with the command: ` terraform import azurerm_kubernetes_cluster.k8s <aksResourceId>`
+2. Add the oms_agent add-on profile to the existing azurerm_kubernetes_cluster resource.
+ ```
+ oms_agent {
+ log_analytics_workspace_id = var.workspace_resource_id
+ msi_auth_for_monitoring_enabled = true
+ }
+ ```
+3. Copy the DCR and DCRA resources from the Terraform templates
+4. Run `terraform plan -out main.tfplan` and make sure the change is adding the oms_agent property. Note: If the `azurerm_kubernetes_cluster` resource definition differs from the existing cluster, the existing cluster will be destroyed and recreated.
+5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
+
+> [!TIP]
+> - Edit the `main.tf` file appropriately before running the terraform template
+> - Data will start flowing after 10 minutes since the cluster needs to be ready first
+> - WorkspaceID needs to match the format `/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/example-resource-group/providers/Microsoft.OperationalInsights/workspaces/workspaceValue`
+> - If resource group already exists, run `terraform import azurerm_resource_group.rg /subscriptions/<Subscription_ID>/resourceGroups/<Resource_Group_Name>` before terraform plan
+
+### [Azure Policy](#tab/policy)
+
+#### Azure Portal
+
+1. From the **Definitions** tab of the **Policy** menu in the Azure portal, create a policy definition with the following details.
+
+ - **Definition location**: Azure subscription where the policy definition should be stored.
+ - **Name**: AKS-Monitoring-Addon
+ - **Description**: Azure custom policy to enable the Monitoring Add-on onto Azure Kubernetes clusters.
+ - **Category**: Select **Use existing** and then *Kubernetes* from the dropdown list.
+ - **Policy rule**: Replace the existing sample JSON with the contents of [https://aka.ms/aks-enable-monitoring-custom-policy](https://aka.ms/aks-enable-monitoring-custom-policy).
+
+1. Select the new policy definition **AKS Monitoring Addon**.
+1. Select **Assign** and specify a **Scope** of where the policy should be assigned.
+1. Select **Next** and provide the resource ID of the Log Analytics workspace.
+1. Create a remediation task if you want to apply the policy to existing AKS clusters in the selected scope.
+1. Select **Review + create** to create the policy assignment.
+
+#### Azure CLI
+
+1. Download Azure Policy template and parameter files.
+
+ - Template file: [https://aka.ms/enable-monitoring-msi-azure-policy-template](https://aka.ms/enable-monitoring-msi-azure-policy-template)
+ - Parameter file: [https://aka.ms/enable-monitoring-msi-azure-policy-parameters](https://aka.ms/enable-monitoring-msi-azure-policy-parameters)
++
+2. Create the policy definition using the following CLI command:
+
+ ```
+ az policy definition create --name "AKS-Monitoring-Addon-MSI" --display-name "AKS-Monitoring-Addon-MSI" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules azure-policy.rules.json --params azure-policy.parameters.json
+ ```
+
+3. Create the policy assignment using the following CLI command:
+
+ ```
+    az policy assignment create --name aks-monitoring-addon --policy "AKS-Monitoring-Addon-MSI" --assign-identity --identity-scope /subscriptions/<subscriptionId> --role Contributor --scope /subscriptions/<subscriptionId> --location <location> -p "{ \"workspaceResourceId\": { \"value\": \"/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>\" } }"
+ ```
+
+After the policy is assigned to the subscription, whenever you create a new cluster without Container insights enabled, the policy runs and deploys the monitoring add-on to enable Container insights.
+++++++
+## Enable full monitoring with Azure portal
+Using the Azure portal, you can enable both Managed Prometheus and Container insights at the same time.
+
+> [!NOTE]
+> If you want to enable Managed Prometheus without Container insights, then [enable it from the Azure Monitor workspace](./kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) as described below.
+
+### New AKS cluster (Prometheus and Container insights)
+
+When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Integrations** tab. In the Azure Monitor section, select either **Default configuration** or **Custom configuration** if you want to specify which workspaces to use. You can perform additional configuration once the cluster is created.
++
+### Existing cluster (Prometheus and Container insights)
+
+This option enables Container insights and optionally Prometheus and Grafana on an existing AKS cluster.
+
+1. Either select **Insights** from the cluster's menu, or select **Containers** from the **Monitor** menu, open the **Unmonitored clusters** tab, and select **Enable** next to a cluster.
+   1. If Container insights isn't enabled for the cluster, you're presented with a screen that identifies which features are enabled. Select **Configure monitoring**.
+
+ :::image type="content" source="media/aks-onboard/configure-monitoring-screen.png" lightbox="media/aks-onboard/configure-monitoring-screen.png" alt-text="Screenshot that shows the configuration screen for a cluster.":::
+
+ 2. If Container insights has already been enabled on the cluster, select the **Monitoring Settings** button to modify the configuration.
+
+ :::image type="content" source="media/aks-onboard/monitor-settings-button.png" lightbox="media/aks-onboard/monitor-settings-button.png" alt-text="Screenshot that shows the monitoring settings button for a cluster.":::
+
+2. **Container insights** is enabled. Select the checkboxes for **Enable Prometheus metrics** and **Enable Grafana** if you also want to enable them for the cluster. If you have an existing Azure Monitor workspace and Grafana workspace, they're selected for you.
+
+ :::image type="content" source="media/prometheus-metrics-enable/configure-container-insights.png" lightbox="media/prometheus-metrics-enable/configure-container-insights.png" alt-text="Screenshot that shows the dialog box to configure Container insights with Prometheus and Grafana.":::
+
+3. Click **Advanced settings** to select alternate workspaces or create new ones. The **Cost presets** setting allows you to modify the default collection details to reduce your monitoring costs. See [Enable cost optimization settings in Container insights](./container-insights-cost-config.md) for details.
+
+ :::image type="content" source="media/aks-onboard/advanced-settings.png" lightbox="media/aks-onboard/advanced-settings.png" alt-text="Screenshot that shows the advanced settings dialog box.":::
+
+4. Click **Configure** to save the configuration.
+
+### Existing cluster (Prometheus only)
+
+This option enables Prometheus metrics on a cluster without enabling Container insights.
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
+1. Select **Monitored clusters** in the **Managed Prometheus** section to display a list of AKS clusters.
+1. Select **Configure** next to the cluster you want to enable.
+
+ :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration.":::
+
+### Existing cluster (Add Prometheus)
++
+1. Select **Containers** from the **Monitor** menu, open the **Monitored clusters** tab, and select **Configure** next to a cluster in the **Managed Prometheus** column.
++
+## Enable Windows metrics collection (preview)
+
+> [!NOTE]
+> There is no CPU or memory limit in `windows-exporter-daemonset.yaml`, so it might over-provision the Windows nodes. For more details, see [Resource reservation](https://kubernetes.io/docs/concepts/configuration/windows-resource-management/#resource-reservation).
+>
+> As you deploy workloads, set resource memory and CPU limits on containers. This also subtracts from `NodeAllocatable` and helps the cluster-wide scheduler determine which pods to place on which nodes. Scheduling pods without limits might over-provision the Windows nodes and, in extreme cases, can cause the nodes to become unhealthy.
++
+As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus add-on container (prometheus_collector), Windows metric collection is enabled for AKS clusters. Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
+
+1. Manually install windows-exporter on AKS nodes to access Windows metrics.
+ Enable the following collectors:
+
+ * `[defaults]`
+ * `container`
+ * `memory`
+ * `process`
+ * `cpu_info`
+
+ Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
+
+ ```
+ kubectl apply -f windows-exporter-daemonset.yaml
+ ```
+
+1. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster. Set the `windowsexporter` and `windowskubeproxy` Booleans to `true`. For more information, see [Metrics add-on settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-add-on-settings-configmap).
+1. Enable the recording rules that are required for the out-of-the-box dashboards:
+
+   * If onboarding using the CLI, include the option `--enable-windows-recording-rules`, as shown in the example after this list.
+ * If onboarding using an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+ * If the cluster is already onboarded, use [this ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [this parameter file](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) to create the rule groups.
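+
+For example, a minimal sketch of the CLI option referenced in the list above, combined with enabling the metrics add-on (cluster and resource group names are placeholders):
+
+```azurecli
+az aks update --enable-azure-monitor-metrics --enable-windows-recording-rules -n <cluster-name> -g <cluster-resource-group>
+```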
+
+## Verify deployment
+Use the [kubectl command line tool](../../aks/learn/quick-kubernetes-deploy-cli.md#connect-to-the-cluster) to verify that the agent is deployed properly.
+
+### Managed Prometheus
+
+**Verify that the DaemonSet was deployed properly on the Linux node pools**
+
+```AzureCLI
+kubectl get ds ama-metrics-node --namespace=kube-system
+```
+
+The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-metrics-node 1 1 1 1 1 <none> 10h
+```
+
+**Verify that Windows nodes were deployed properly**
+
+```AzureCLI
+kubectl get ds ama-metrics-win-node --namespace=kube-system
+```
+
+The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-metrics-win-node 3 3 3 3 3 <none> 10h
+```
+
+**Verify that the two ReplicaSets were deployed for Prometheus**
+
+```AzureCLI
+kubectl get rs --namespace=kube-system
+```
+
+The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get rs --namespace=kube-system
+NAME DESIRED CURRENT READY AGE
+ama-metrics-5c974985b8 1 1 1 11h
+ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
+```
++
+### Container insights
+
+**Verify that the DaemonSets were deployed properly on the Linux node pools**
+
+```AzureCLI
+kubectl get ds ama-logs --namespace=kube-system
+```
+
+The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
+```
+
+**Verify that Windows nodes were deployed properly**
+
+```AzureCLI
+kubectl get ds ama-logs-windows --namespace=kube-system
+```
+
+The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ama-logs-windows 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
+```
++
+**Verify deployment of the Container insights solution**
+
+```AzureCLI
+kubectl get deployment ama-logs-rs -n=kube-system
+```
+
+The output should resemble the following example:
+
+```output
+User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ama-logs-rs 1 1 1 1 3h
+```
+
+**View configuration with CLI**
+
+Use the `az aks show` command to find out whether the solution is enabled, the Log Analytics workspace resource ID, and summary information about the cluster.
+
+```azurecli
+az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
+```
+
+The command returns JSON-formatted information about the solution. The `addonProfiles` section should include information on the `omsagent` add-on, as in the following example:
+
+```output
+"addonProfiles": {
+ "omsagent": {
+ "config": {
+ "logAnalyticsWorkspaceResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/microsoft.operationalinsights/workspaces/my-workspace",
+ "useAADAuth": "true"
+ },
+ "enabled": true,
+ "identity": null
+ },
+}
+```
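+
+If you only want the add-on configuration, you can narrow the output with the CLI's global `--query` parameter (a JMESPath expression). For example:
+
+```azurecli
+az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> --query addonProfiles.omsagent
+```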
++
+## Resources provisioned
+
+When you enable monitoring, the following resources are created in your subscription:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+|:|:|:|:|:|
+| `MSCI-<aksclusterregion>-<clustername>` | **Data Collection Rule** | Same as cluster | Same as Log Analytics workspace | This data collection rule is for log collection by Azure Monitor Agent. It uses the Log Analytics workspace as its destination and is associated with the AKS cluster resource. |
+| `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection Rule** | Same as cluster | Same as Azure Monitor workspace | This data collection rule is for Prometheus metrics collection by the metrics add-on. It uses the chosen Azure Monitor workspace as its destination and is associated with the AKS cluster resource. |
+| `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection Endpoint** | Same as cluster | Same as Azure Monitor workspace | This data collection endpoint is used by the preceding data collection rule to ingest Prometheus metrics from the metrics add-on. |
+
+When you create a new Azure Monitor workspace, the following additional resources are created as part of it:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+|:|:|:|:|:|
+| `<azuremonitor-workspace-name>` | **Data Collection Rule** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same as Azure Monitor workspace | DCR created when you use an OSS Prometheus server to remote write to the Azure Monitor workspace. |
+| `<azuremonitor-workspace-name>` | **Data Collection Endpoint** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same as Azure Monitor workspace | DCE created when you use an OSS Prometheus server to remote write to the Azure Monitor workspace. |
+
++
+## Differences between Windows and Linux clusters
+
+The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
+
+- Windows doesn't have a Memory RSS metric. As a result, it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Disk storage capacity information isn't available for Windows nodes.
+- Only pod environments are monitored, not Docker environments.
+- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
+
+>[!NOTE]
+> Container insights support for the Windows Server 2022 operating system is in preview.
++
+The containerized Linux agent (ReplicaSet pod) makes API calls to all the Windows nodes on the Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. For this collection to work, the Kubelet secure port (:10250) must be open for both inbound and outbound traffic in the cluster's virtual network.
+
+If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open in the cluster's virtual network.
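+
+A hypothetical network security group rule that allows this traffic might look like the following sketch; the rule name, priority, and resource names are placeholders, and your network layout might require different source and destination scoping:
+
+```azurecli
+# Example only: allow inbound TCP traffic on the Kubelet secure port
+az network nsg rule create --resource-group <resourceGroupName> --nsg-name <nsgName> --name AllowKubeletSecurePort --priority 200 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 10250
+```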
+
+## Next steps
+
+* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Kubernetes Monitoring Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-firewall.md
+
+ Title: Network firewall requirements for monitoring Kubernetes cluster
+description: Proxy and firewall configuration information required for the containerized agent to communicate with Managed Prometheus and Container insights.
+ Last updated : 11/14/2023
+# Network firewall requirements for monitoring Kubernetes cluster
+
+The following tables list the proxy and firewall configuration information required for the containerized agent to communicate with Managed Prometheus and Container insights. All network traffic from the agent is outbound to Azure Monitor.
+
+## Azure public cloud
+
+| Endpoint| Purpose | Port |
+|:|:|:|
+| `*.ods.opinsights.azure.com` | Data ingestion | 443 |
+| `*.oms.opinsights.azure.com` | Azure Monitor agent (AMA) onboarding | 443 |
+| `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
+| `*.monitoring.azure.com` | | 443 |
+| `login.microsoftonline.com` | Authentication with Microsoft Entra ID | 443 |
+| `global.handler.control.monitor.azure.com` | Access control service | 443 |
+| `<cluster-region-name>.ingest.monitor.azure.com` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
+| `<cluster-region-name>.handler.control.monitor.azure.com` | Fetch data collection rules for specific cluster | 443 |
+
+## Microsoft Azure operated by 21Vianet cloud
+
+| Endpoint| Purpose | Port |
+|:|:|:|
+| `*.ods.opinsights.azure.cn` | Data ingestion | 443 |
+| `*.oms.opinsights.azure.cn` | Azure Monitor agent (AMA) onboarding | 443 |
+| `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
+| `global.handler.control.monitor.azure.cn` | Access control service | 443 |
+| `<cluster-region-name>.handler.control.monitor.azure.cn` | Fetch data collection rules for specific cluster | 443 |
+
+## Azure Government cloud
+
+| Endpoint| Purpose | Port |
+|:|:|:|
+| `*.ods.opinsights.azure.us` | Data ingestion | 443 |
+| `*.oms.opinsights.azure.us` | Azure Monitor agent (AMA) onboarding | 443 |
+| `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
+| `global.handler.control.monitor.azure.us` | Access control service | 443 |
+| `<cluster-region-name>.handler.control.monitor.azure.us` | Fetch data collection rules for specific cluster | 443 |
++
+## Next steps
+
+* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Enable scraping of Prometheus metrics by Azure Monitor managed service for Prome
- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md). - Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md).-- Enable for an existing [AKS cluster](../essentials/prometheus-metrics-enable.md) or [Arc-enabled Kubernetes cluster (preview)](../essentials/prometheus-metrics-from-arc-enabled-cluster.md).
+- Enable for an existing [AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) or [Arc-enabled Kubernetes cluster (preview)](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
If you already have a Prometheus environment that you want to use for your AKS clusters, then enable Azure Monitor managed service for Prometheus and then use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md).
Once Container insights is enabled for a cluster, perform the following actions
- Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics). You can disable collection of these metrics by configuring Container insights to only collect **Logs and events** as described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings). This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights. - Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. -- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logging-v2.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md).
+- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logs-schema.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md).
If you have an existing solution for collection of logs, then follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward to alternate system.
azure-monitor Prometheus Metrics Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-disable.md
- Title: Disable collecting Prometheus metrics on an Azure Kubernetes Service cluster
-description: Disable the collection of Prometheus metrics from an Azure Kubernetes Service cluster and remove the agent from the cluster nodes.
---- Previously updated : 07/30/2023---
-# Disable Prometheus metrics collection from an AKS cluster
-
-Currently, the Azure CLI is the only option to remove the metrics add-on from your AKS cluster, and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
-
-The `az aks update --disable-azure-monitor-metrics` command:
-
-+ Removes the ama-metrics agent from the cluster nodes.
-+ Deletes the recording rules created for that cluster.
-+ Deletes the data collection endpoint (DCE).
-+ Deletes the data collection rule (DCR).
-+ Deletes the DCRA and recording rules groups created as part of onboarding.
-
-> [!NOTE]
-> This action doesn't remove any existing data stored in your Azure Monitor workspace.
-
-```azurecli
-az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
-```
-
-## Next steps
--- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)-- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)-- [Use Azure Monitor managed service for Prometheus as the data source for Grafana](../essentials/prometheus-grafana.md)-- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-enable.md
- Title: Enable Azure Monitor managed service for Prometheus
-description: Enable Azure Monitor managed service for Prometheus and configure data collection from your Azure Kubernetes Service (AKS) cluster.
---- Previously updated : 07/30/2023---
-# Collect Prometheus metrics from an AKS cluster
-This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you perform this configuration, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. This sends data to the Azure Monitor workspace that you specify.
-
-> [!NOTE]
-> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster. However, both processes use the Azure Monitor agent. For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md)..
-
-The Azure Monitor metrics agent's architecture utilizes a ReplicaSet and a DaemonSet. The ReplicaSet pod scrapes cluster-wide targets such as `kube-state-metrics` and custom application targets that are specified. The DaemonSet pods scrape targets solely on the node that the respective pod is deployed on, such as `node-exporter`. This is so that the agent can scale as the number of nodes and pods on a cluster increases.
-
-## Prerequisites
--- You must either have an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) or [create a new one](../essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace).-- The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).-- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace:
- - Microsoft.ContainerService
- - Microsoft.Insights
- - Microsoft.AlertsManagement
-
-> [!NOTE]
-> `Contributor` permission is enough for enabling the addon to send data to the Azure Monitor workspace. You will need `Owner` level permission in case you're trying to link your Azure Monitor Workspace to view metrics in Azure Managed Grafana. This is required because the user executing the onboarding step, needs to be able to give the Azure Managed Grafana System Identity `Monitoring Reader` role on the Azure Monitor Workspace to query the metrics.
----
-## Enable Prometheus metric collection
-Use any of the following methods to install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
-
-### [Azure portal](#tab/azure-portal)
-
-There are multiple options to enable Prometheus metrics on your cluster from the Azure portal.
-
-#### New cluster
-When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Integrations** tab.
--
-#### From the Azure Monitor workspace
-This option enables Prometheus metrics on a cluster without enabling Container insights.
-
-1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
-1. Select **Monitored clusters** in the **Managed Prometheus** section to display a list of AKS clusters.
-1. Select **Configure** next to the cluster you want to enable.
-
- :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration.":::
-
-#### From an existing cluster monitored with Container insights
-This option adds Prometheus metrics to a cluster already enabled for Container insights.
-
-1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
-2. Click **Insights**.
-3. Click **Monitor settings**.
-
- :::image type="content" source="media/prometheus-metrics-enable/aks-cluster-monitor-settings.png" lightbox="media/prometheus-metrics-enable/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
-
-4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
-5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafan) for details on creating a Grafana workspace.
-
- :::image type="content" source="media/prometheus-metrics-enable/aks-cluster-monitor-settings-details.png" lightbox="media/prometheus-metrics-enable/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
-
-6. Click **Configure** to complete the configuration.
-
-See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations-during-enablementdeployment)
-
-#### From an existing cluster
-This option enables Prometheus, Grafana, and Container insights on a cluster.
-
-1. Open the clusters menu in the Azure portal and select **Insights**.
-3. Select **Configure monitoring**.
-4. Container insights is already enabled. Select the checkboxes for **Enable Prometheus metrics** and **Enable Grafana**. If you have existing Azure Monitor workspace and Grafana workspace, then they're selected for you. Click **Advanced settings** to select alternate workspaces or create new ones.
-
- :::image type="content" source="media/prometheus-metrics-enable/configure-container-insights.png" lightbox="media/prometheus-metrics-enable/configure-container-insights.png" alt-text="Screenshot that shows that show the dialog box to configure Container insights with Prometheus and Grafana.":::
-
-5. Click **Configure** to save the configuration.
--
-### [CLI](#tab/cli)
-
-#### Prerequisites
--- The aks-preview extension must be uninstalled by using the command `az extension remove --name aks-preview`. For more information on how to uninstall a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).-- Az CLI version of 2.49.0 or higher is required for this feature. Check the aks-preview version by using the `az version` command.-
-#### Install the metrics add-on
-
-Use `az aks create` or `az aks update` with the `-enable-azure-monitor-metrics` option to install the metrics add-on. Depending on the Azure Monitor workspace and Grafana workspace you want to use, choose one of the following options:
--- **Create a new default Azure Monitor workspace.**<br>
-If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in a resource group with the name `DefaultRG-<cluster_region>` and is named `DefaultAzureMonitorWorkspace-<mapped_region>`.
--
- ```azurecli
- az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
- ```
--- **Use an existing Azure Monitor workspace.**<br>
-If the existing Azure Monitor workspace is already linked to one or more Grafana workspaces, data is available in that Grafana workspace.
-
- ```azurecli
- az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
- ```
--- **Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
-This option creates a link between the Azure Monitor workspace and the Grafana workspace.
-
- ```azurecli
- az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
- ```
-
-The output for each command looks similar to the following example:
-
-```json
-"azureMonitorProfile": {
- "metrics": {
- "enabled": true,
- "kubeStateMetrics": {
- "metricAnnotationsAllowList": "",
- "metricLabelsAllowlist": ""
- }
- }
-}
-```
-
-#### Optional parameters
-You can use the following optional parameters with the previous commands:
-
-| Parameter | Description |
-|:|:|
-| `--ksm-metric-annotations-allow-list` | Comma-separated list of Kubernetes annotations keys used in the resource's kube_resource_annotations metric. For example, kube_pod_annotations is the annotations metric for the pods resource. By default, this metric contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and Kubernetes annotation keys that you want to allow for them. A single `*` can be provided for each resource to allow any annotations, but this has severe performance implications. For example, `pods=[kubernetes.io/team,...],namespaces=[kubernetes.io/team],...`. |
-| `--ksm-metric-labels-allow-list` | Comma-separated list of additional Kubernetes label keys used in the resource's kube_resource_labels metric. For example, kube_pod_labels is the labels metric for the pods resource. By default, this metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and the Kubernetes label keys that you want to allow for them. A single `*` can be provided for each resource to allow any labels, but this has severe performance implications. For example, `pods=[app],namespaces=[k8s-label-1,k8s-label-n,...],...`. |
-| `--enable-windows-recording-rules` | lets you enable the recording rule groups required for proper functioning of the Windows dashboards. |
-
-**Use annotations and labels.**
-
-```azurecli
-az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
-```
-
-The output is similar to the following example:
-
-```json
- "azureMonitorProfile": {
- "metrics": {
- "enabled": true,
- "kubeStateMetrics": {
- "metricAnnotationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
- "metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]"
- }
- }
- }
-```
-
-## [Azure Resource Manager](#tab/resource-manager)
-
-### Prerequisites
--- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.-- The template must be deployed in the same resource group as the Azure Managed Grafana instance.-- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Reader` role directly by deploying the template.-
-### Retrieve required values for Grafana resource
-
-On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, the list of already existing Grafana integrations is needed. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
-
-```json
-"properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- }
- ]
- }
-}
-```
-
-### Download and edit the template and the parameter file
-
-1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
-1. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameterss](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
-1. Edit the values in the parameter file.
-
- | Parameter | Value |
- |:|:|
- | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
- | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
- | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys to be used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys to be used in the resource's annotations metric. |
- | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
- | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
- | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
-
-1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. The following example is similar:
-
- ```json
- {
- "type": "Microsoft.Dashboard/grafana",
- "apiVersion": "2022-08-01",
- "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
- "sku": {
- "name": "[parameters('grafanaSku')]"
- },
- "location": "[parameters('grafanaLocation')]",
- "properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- },
- {
- "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
- }
- ]
- }
- }
- ```
-
-In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
-
-The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
-
-## [Bicep](#tab/bicep)
-
-### Prerequisites
--- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.-- The template needs to be deployed in the same resource group as the Azure Managed Grafana instance.-- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Reader` role directly by deploying the template.-
-### Limitation with Bicep deployment
-Currently in Bicep, there's no way to explicitly scope the `Monitoring Reader` role assignment on a string parameter "resource ID" for an Azure Monitor workspace (like in an ARM template). Bicep expects a value of type `resource | tenant`. There also is no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
-
-Therefore, the default scoping for the `Monitoring Reader` role is on the resource group. The role is applied on the same Azure Monitor workspace (by inheritance), which is the expected behavior. After you deploy this Bicep template, the Grafana instance is given `Monitoring Reader` permissions for all the Azure Monitor workspaces in that resource group.
-
-### Retrieve required values for a Grafana resource
-
-On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, the list of already existing Grafana integrations is needed. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
-
-```json
-"properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- }
- ]
- }
-}
-```
-
-### Download and edit templates and the parameter file
-
-1. Download the [main Bicep template](https://aka.ms/azureprometheus-enable-bicep-template). Save it as **FullAzureMonitorMetricsProfile.bicep**.
-2. Download the [parameter file](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main Bicep template.
-3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files into the same directory as the main Bicep template.
-4. Edit the values in the parameter file.
-5. The main Bicep template creates all the required resources. It uses two modules for creating the Data Collection Rule Associations (DCRA) and Azure Monitor metrics profile resources from the other two Bicep files.
-
- | Parameter | Value |
- |:|:|
- | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
- | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
- | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
- | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys used in the resource's annotations metric. |
- | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
- | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
- | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
-
-1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. The following example is similar:
-
- ```json
- {
- "type": "Microsoft.Dashboard/grafana",
- "apiVersion": "2022-08-01",
- "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
- "sku": {
- "name": "[parameters('grafanaSku')]"
- },
- "location": "[parameters('grafanaLocation')]",
- "properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- },
- {
- "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
- }
- ]
- }
- }
- ```
-
-In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the ARM template. If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
-
-The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
-
-## [Terraform](#tab/terraform)
-
-### Prerequisites
--- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.-- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.-- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template.-
-### Retrieve required values for a Grafana resource
-
-On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-
-If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the azure_monitor_workspace_integrations block(shown here) in main.tf with the list of grafana integrations.
-
-```.tf
- azure_monitor_workspace_integrations {
- resource_id = var.monitor_workspace_id[var.monitor_workspace_id1, var.monitor_workspace_id2]
- }
-```
-
-### Download and edit the templates
-
-If you're deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow these steps:
-
-1. Download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357).
-2. Edit the variables in variables.tf file with the correct parameter values.
-3. Run `terraform init -upgrade` to initialize the Terraform deployment.
-4. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment.
-5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
--
-Note: Pass the variables for `annotations_allowed` and `labels_allowed` keys in main.tf only when those values exist. These are optional blocks.
-
-> [!NOTE]
-> Edit the main.tf file appropriately before running the terraform template. Add in any existing azure_monitor_workspace_integrations values to the grafana resource before running the template. Else, older values gets deleted and replaced with what is there in the template during deployment. Users with 'User Access Administrator' role in the subscription of the AKS cluster can enable 'Monitoring Reader' role directly by deploying the template. Edit the grafanaSku parameter if you're using a nonstandard SKU and finally run this template in the Grafana Resource's resource group.
-
-## [Azure Policy](#tab/azurepolicy)
-
-### Prerequisites
--- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.-
-### Download Azure Policy rules and parameters and deploy
-
-1. Download the main [Azure Policy rules template](https://aka.ms/AddonPolicyMetricsProfile). Save it as **AddonPolicyMetricsProfile.rules.json**.
-1. Download the [parameter file](https://aka.ms/AddonPolicyMetricsProfile.parameters). Save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
-1. Create the policy definition using the following command:
-
- `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules AddonPolicyMetricsProfile.rules.json --params AddonPolicyMetricsProfile.parameters.json`
-
-1. After you create the policy definition, in the Azure portal, select **Policy** > **Definitions**. Select the policy definition you created.
-1. Select **Assign**, go to the **Parameters** tab, and fill in the details. Select **Review + Create**.
-1. After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy will run and deploy to enable Prometheus monitoring. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource after you go to the **Policy Assignment**.
-1. Now you should see metrics flowing in the existing Azure Managed Grafana instance, which is linked with the corresponding Azure Monitor workspace.
-
-Afterwards, if you create a new Managed Grafana instance, you can link it with the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant **Azure Monitor Workspace** page. The `Monitoring Reader` role must be assigned to the managed identity of the Managed Grafana instance with the scope as the Azure Monitor workspace, so that Grafana has access to query the metrics. Use the following instructions to do so:
-
-1. On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-
-1. Copy the value of the `principalId` field for the `SystemAssigned` identity.
-
- ```json
- "identity": {
- "principalId": "00000000-0000-0000-0000-000000000000",
- "tenantId": "00000000-0000-0000-0000-000000000000",
- "type": "SystemAssigned"
- },
- ```
-1. On the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** > **Add role assignment**.
-1. Select `Monitoring Reader`.
-1. Select **Managed identity** > **Select members**.
-1. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
-1. Choose **Select** > **Review+assign**.
-
-### Deploy the template
-
-Deploy the template with the parameter file by using any valid method for deploying ARM templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
-
-### Limitations during enablement/deployment
--- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not run as expected.-- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified.-- You must get the existing Azure Monitor workspace integrations for a Grafana instance and update the ARM template with it. Otherwise, the ARM deployment gets over-written, which removes existing integrations.--
-## Enable Windows metrics collection (preview)
-
-> [!NOTE]
-> There is no CPU/Memory limit in windows-exporter-daemonset.yaml so it may over-provision the Windows nodes
-> For more details see [Resource reservation](https://kubernetes.io/docs/concepts/configuration/windows-resource-management/#resource-reservation)
->
-> As you deploy workloads, set resource memory and CPU limits on containers. This also subtracts from NodeAllocatable and helps the cluster-wide scheduler in determining which pods to place on which nodes.
-> Scheduling pods without limits may over-provision the Windows nodes and in extreme cases can cause the nodes to become unhealthy.
--
-As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon container (prometheus_collector), Windows metric collection has been enabled for the AKS clusters. Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
-
-1. Manually install windows-exporter on AKS nodes to access Windows metrics.
- Enable the following collectors:
-
- * `[defaults]`
- * `container`
- * `memory`
- * `process`
- * `cpu_info`
-
- Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
-
- ```
- kubectl apply -f windows-exporter-daemonset.yaml
- ```
-
-1. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster. Set the `windowsexporter` and `windowskubeproxy` Booleans to `true`. For more information, see [Metrics add-on settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-add-on-settings-configmap).
-1. Enable the recording rules that are required for the out-of-the-box dashboards:
-
- * If onboarding using the CLI, include the option `--enable-windows-recording-rules`.
- * If onboarding using an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
-
-## Verify deployment
-
-1. Run the following command to verify that the DaemonSet was deployed properly on the Linux node pools:
-
- ```
- kubectl get ds ama-metrics-node --namespace=kube-system
- ```
-
- The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
-
- ```
- User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
- NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- ama-metrics-node 1 1 1 1 1 <none> 10h
- ```
-
-1. Run the following command to verify that the DaemonSet was deployed properly on the Windows node pools:
-
- ```
- kubectl get ds ama-metrics-win-node --namespace=kube-system
- ```
-
- The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
-
- ```
- User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
- NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- ama-metrics-win-node 3 3 3 3 3 <none> 10h
- ```
-
-1. Run the following command to verify that the two ReplicaSets were deployed properly:
-
- ```
- kubectl get rs --namespace=kube-system
- ```
-
- The output should resemble the following example:
-
- ```
- User@aksuser:~$kubectl get rs --namespace=kube-system
- NAME DESIRED CURRENT READY AGE
- ama-metrics-5c974985b8 1 1 1 11h
- ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
- ```
-## Artifacts/Resources provisioned/created as a result of metrics addon enablement for an AKS cluster
-
-When you enable metrics addon, the following resources are provisioned:
-
-| Resource Name | Resource Type | Resource Group | Region/Location | Description |
- |:|:|:|:|:|
- | `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection Rule** | Same Resource group as AKS cluster resource | Same region as Azure Monitor Workspace | This data collection rule is for prometheus metrics collection by metrics addon, which has the chosen Azure monitor workspace as destination, and also it is associated to the AKS cluster resource |
- | `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection endpoint** | Same Resource group as AKS cluster resource | Same region as Azure Monitor Workspace | This data collection endpoint is used by the above data collection rule for ingesting Prometheus metrics from the metrics addon|
-
-When you create a new Azure Monitor workspace, the following additional resources are created as part of it
-
-| Resource Name | Resource Type | Resource Group | Region/Location | Description |
- |:|:|:|:|:|
- | `<azuremonitor-workspace-name>` | **System Data Collection Rule** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as Azure Monitor Workspace | This is **system** data collection rule that customers can use when they use OSS Prometheus server to Remote Write to Azure Monitor Workspace |
- | `<azuremonitor-workspace-name>` | **System Data Collection endpoint** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as Azure Monitor Workspace | This is **system** data collection endpoint that customers can use when they use OSS Prometheus server to Remote Write to Azure Monitor Workspace |
-
-
-## HTTP Proxy
-
-Azure Monitor metrics addon supports HTTP Proxy and uses the same settings as the HTTP Proxy settings for the AKS cluster configured with [these instructions](../../../articles/aks/http-proxy.md).
-
-## Network firewall requirements
-
-**Azure public cloud**
-
-The following table lists the firewall configuration required for Azure monitor Prometheus metrics ingestion for Azure Public cloud. All network traffic from the agent is outbound to Azure Monitor.
-
-|Agent resource| Purpose | Port |
-|--|||
-| `global.handler.control.monitor.azure.com` | Access control service/ Azure Monitor control plane service | 443 |
-| `*.ingest.monitor.azure.com` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
-| `*.handler.control.monitor.azure.com` | For querying data collection rules | 443 |
-
-**Azure US Government cloud**
-
-The following table lists the firewall configuration required for Azure monitor Prometheus metrics ingestion for Azure US Government cloud. All network traffic from the agent is outbound to Azure Monitor.
-
-|Agent resource| Purpose | Port |
-|--|||
-| `global.handler.control.monitor.azure.us` | Access control service/ Azure Monitor control plane service | 443 |
-| `*.ingest.monitor.azure.us` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
-| `*.handler.control.monitor.azure.us` | For querying data collection rules | 443 |
-
-## Uninstall the metrics add-on
-
-To uninstall the metrics add-on, see [Disable Prometheus metrics collection on an AKS cluster.](./prometheus-metrics-disable.md)
-
-## Supported regions
-
-The list of regions Azure Monitor Metrics and Azure Monitor Workspace is supported in can be found [here](https://aka.ms/ama-metrics-supported-regions) under the Managed Prometheus tag.
-
-## Frequently asked questions
-
-This section provides answers to common questions.
-
-### Does enabling managed service for Prometheus on my Azure Kubernetes Service cluster also enable Container insights?
-
-You have options for how you can collect your Prometheus metrics. If you use the Azure portal and enable Prometheus metrics collection and install the Azure Kubernetes Service (AKS) add-on from the Azure Monitor workspace UX, it won't enable Container insights and collection of log data. When you go to the Insights page on your AKS cluster, you're prompted to enable Container insights to collect log data.<br>
-
-If you use the Azure portal and enable Prometheus metrics collection and install the AKS add-on from the Insights page of your AKS cluster, it enables log collection into a Log Analytics workspace and Prometheus metrics collection into an Azure Monitor workspace.
-
-## Next steps
--- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)-- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)-- [Use Azure Monitor managed service for Prometheus as the data source for Grafana](../essentials/prometheus-grafana.md)-- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics From Arc Enabled Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-from-arc-enabled-cluster.md
- Title: Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)
-description: How to configure your Azure Arc-enabled Kubernetes cluster (preview) to send data to Azure Monitor managed service for Prometheus.
---- Previously updated : 05/07/2023--
-# Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)
-
-This article describes how to configure your Azure Arc-enabled Kubernetes cluster (preview) to send data to Azure Monitor managed service for Prometheus. When you configure your Azure Arc-enabled Kubernetes cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the Azure Monitor agent is installed with a metrics extension. You then specify the Azure Monitor workspace where the data should be sent.
-
-> [!NOTE]
-> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same agent used by Container insights.
-> For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md). For details on adding Prometheus collection to a cluster that already has Container insights enabled, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md).
-
-## Supported configurations
-
-The following configurations are supported:
-
-+ Azure Monitor Managed Prometheus supports monitoring Azure Arc-enabled Kubernetes. For more information, see [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
-+ Docker
-+ Moby
-+ CRI-compatible container runtimes such as CRI-O
-
-The following configurations are not supported:
-
-+ Windows
-+ Azure Red Hat OpenShift 4
-
-## Prerequisites
-
-+ Prerequisites listed in [Deploy and manage Azure Arc-enabled Kubernetes cluster extensions](../../azure-arc/kubernetes/extensions.md#prerequisites)
-+ An Azure Monitor workspace. To create a new workspace, see [Manage an Azure Monitor workspace](../essentials/azure-monitor-workspace-manage.md).
-+ The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).
-+ The following resource providers must be registered in the subscription of the Arc-enabled Kubernetes cluster and the Azure Monitor workspace (a sample registration command follows this list):
- + Microsoft.Kubernetes
- + Microsoft.Insights
- + Microsoft.AlertsManagement
-+ The following endpoints must be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud):
- **Azure public cloud**
-
- |Endpoint|Port|
 - |--|--|
- |*.ods.opinsights.azure.com |443 |
- |*.oms.opinsights.azure.com |443 |
- |dc.services.visualstudio.com |443 |
- |*.monitoring.azure.com |443 |
- |login.microsoftonline.com |443 |
- |global.handler.control.monitor.azure.com |443 |
- | \<cluster-region-name\>.handler.control.monitor.azure.com |443 |
-
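As referenced in the prerequisites list, you can register the required resource providers with the Azure CLI. A minimal sketch (run it against each subscription involved; registration can take a few minutes):

```azurecli
# Register the resource providers required by the metrics extension.
# Run against the subscription of the Arc-enabled cluster and the subscription
# of the Azure Monitor workspace.
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.AlertsManagement

# Check the registration state of a provider.
az provider show --namespace Microsoft.Insights --query registrationState --output tsv
```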
-## Create an extension instance
-
-### [Portal](#tab/portal)
-
-### Onboard from Azure Monitor workspace
-
-1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
-
-1. Select **Managed Prometheus** to display a list of AKS and Arc clusters.
-1. Select **Configure** for the cluster you want to enable.
--
-### Onboard from Container insights
-
-1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster that you wish to monitor.
-
-1. From the resource pane on the left, select **Insights** under the **Monitoring** section.
-1. On the onboarding page, select **Configure monitoring**.
-1. On the **Configure Container insights** page, select the **Enable Prometheus metrics** checkbox.
-1. Select **Configure**.
--
-### [CLI](#tab/cli)
-
-### Prerequisites
-
-+ The k8s-extension extension must be installed. Install the extension using the command `az extension add --name k8s-extension`.
-+ The k8s-extension version 1.4.1 or higher is required. Check the installed k8s-extension version by using the `az version` command (see the example after this list).
-
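A minimal sketch of installing the CLI extension and checking its version:

```azurecli
# Install the k8s-extension Azure CLI extension, or upgrade it if it's already installed.
az extension add --name k8s-extension --upgrade

# Show the CLI and installed extension versions; confirm k8s-extension is 1.4.1 or higher.
az version
```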
-### Create an extension with default values
-
-+ A default Azure Monitor workspace is created in the resource group `DefaultRG-<cluster_region>` with a name in the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
-+ Auto-upgrade is enabled for the extension.
-
-```azurecli
-az k8s-extension create \
-  --name azuremonitor-metrics \
-  --cluster-name <cluster-name> \
-  --resource-group <resource-group> \
-  --cluster-type connectedClusters \
-  --extension-type Microsoft.AzureMonitor.Containers.Metrics
-```
-
-### Create an extension with an existing Azure Monitor workspace
-
-If the Azure Monitor workspace is already linked to one or more Grafana workspaces, the data is available in Grafana.
-
-```azurecli
-az k8s-extension create \
-  --name azuremonitor-metrics \
-  --cluster-name <cluster-name> \
-  --resource-group <resource-group> \
-  --cluster-type connectedClusters \
-  --extension-type Microsoft.AzureMonitor.Containers.Metrics \
-  --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id>
-```
-
-### Create an extension with an existing Azure Monitor workspace and link with an existing Grafana workspace
-
-This option creates a link between the Azure Monitor workspace and the Grafana workspace.
-
-```azurecli
-az k8s-extension create \
-  --name azuremonitor-metrics \
-  --cluster-name <cluster-name> \
-  --resource-group <resource-group> \
-  --cluster-type connectedClusters \
-  --extension-type Microsoft.AzureMonitor.Containers.Metrics \
-  --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> \
-  grafana-resource-id=<grafana-workspace-name-resource-id>
-```
-
-### Create an extension with optional parameters
-
-You can use the following optional parameters with the previous commands:
-
-`--configurationsettings.AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist` is a comma-separated list of Kubernetes label keys to be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional labels, provide a list of resource names in their plural form and the Kubernetes label keys you would like to allow for them. For example, `=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...`
-
-`--configurationSettings.AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList` is a comma-separated list of Kubernetes annotation keys to be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys you would like to allow for them. For example, `=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...`.
-
-> [!NOTE]
-> A single `*`, for example `'=pods=[*]'`, can be provided per resource to allow any label; however, this has severe performance implications.
--
-```azurecli
-az k8s-extension create \
-  --name azuremonitor-metrics \
-  --cluster-name <cluster-name> \
-  --resource-group <resource-group> \
-  --cluster-type connectedClusters \
-  --extension-type Microsoft.AzureMonitor.Containers.Metrics \
-  --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> \
-  grafana-resource-id=<grafana-workspace-name-resource-id> \
-  AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList="pods=[k8s-annotation-1,k8s-annotation-n]" \
-  AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist="namespaces=[k8s-label-1,k8s-label-n]"
-```
-
-### [Resource Manager](#tab/resource-manager)
-
-### Prerequisites
-
-+ If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following the steps in the [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) section of the Azure resource providers and types article.
-
-+ The Azure Monitor workspace and Azure Managed Grafana workspace must already exist.
-+ The template must be deployed in the same resource group as the Azure Managed Grafana workspace.
-+ Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
-
-### Create an extension
-
-1. Retrieve required values for the Grafana resource
-
- > [!NOTE]
- > Azure Managed Grafana is not currently available in the Azure US Government cloud.
-
- On the Overview page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-
- If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of already existing Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If the field doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
-
- ```json
- "properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- }
- ]
- }
- }
- ```
-
-1. Download and edit the template and the parameter file
--
- 1. Download the template at https://aka.ms/azureprometheus-arc-arm-template and save it as *existingClusterOnboarding.json*.
-
- 1. Download the parameter file at https://aka.ms/azureprometheus-arc-arm-template-parameters and save it as *existingClusterParam.json*.
-
-1. Edit the following fields' values in the parameter file.
-
- |Parameter|Value |
- |||
- |`azureMonitorWorkspaceResourceId` |Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the Overview page for the Azure Monitor workspace. |
- |`azureMonitorWorkspaceLocation`|Location of the Azure Monitor workspace. Retrieve from the JSON view on the Overview page for the Azure Monitor workspace. |
- |`clusterResourceId` |Resource ID for the Arc cluster. Retrieve from the **JSON view** on the Overview page for the cluster. |
- |`clusterLocation` |Location of the Arc cluster. Retrieve from the **JSON view** on the Overview page for the cluster. |
- |`metricLabelsAllowlist` |Comma-separated list of Kubernetes labels keys to be used in the resource's labels metric.|
- |`metricAnnotationsAllowList` |Comma-separated list of more Kubernetes label keys to be used in the resource's labels metric. |
- |`grafanaResourceId` |Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. |
- |`grafanaLocation` |Location for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. |
- |`grafanaSku` |SKU for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. Use the `sku.name`. |
-
-1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. For example:
-
- ```json
- {
- "type": "Microsoft.Dashboard/grafana",
- "apiVersion": "2022-08-01",
- "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
- "sku": {
- "name": "[parameters('grafanaSku')]"
- },
- "location": "[parameters('grafanaLocation')]",
- "properties": {
- "grafanaIntegrations": {
- "azureMonitorWorkspaceIntegrations": [
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_1"
- },
- {
- "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- },
- {
- "azureMonitorWorkspaceResourceId": "[parameters ('azureMonitorWorkspaceResourceId')]"
- }
- ]
- }
- }
- }
- ```
-
- In the example JSON above, `full_resource_id_1` and `full_resource_id_2` are already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you don't have any existing Grafana integrations, don't include these entries.
-
- The final `azureMonitorWorkspaceResourceId` entry is in the template by default and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
-
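After you've edited the parameter file and template, deploy them to the resource group of the Azure Managed Grafana workspace. The deployment command isn't shown in this excerpt; the following is a minimal sketch using the Azure CLI, where the file names match the earlier steps and `<grafana-resource-group>` is a placeholder:

```azurecli
# Deploy the edited ARM template and parameter file.
# The deployment must target the resource group that contains the Azure Managed Grafana workspace.
az deployment group create \
  --resource-group <grafana-resource-group> \
  --template-file existingClusterOnboarding.json \
  --parameters existingClusterParam.json
```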
-### Verify extension installation status
-
-Once you have successfully created the Azure Monitor extension for your Azure Arc-enabled Kubernetes cluster, you can check the status of the installation using the Azure portal or CLI. Successful installations show the status as `Installed`.
-
-#### Azure portal
-
-1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster with the extension installation.
-
-1. From the resource pane on the left, select the **Extensions** item under the **Settings** section.
-
-1. An extension with the name **azuremonitor-metrics** is listed, with the current status in the **Install status** column.
-
-#### Azure CLI
-
-Run the following command to show the latest status of the `Microsoft.AzureMonitor.Containers.Metrics` extension.
-
-```azurecli
-az k8s-extension show \
-  --name azuremonitor-metrics \
-  --cluster-name <cluster-name> \
-  --resource-group <resource-group> \
-  --cluster-type connectedClusters
-```
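If you only want the installation state rather than the full JSON output, a small variation of the same command is sketched below (the output property is assumed to be `provisioningState`, which reports values such as `Succeeded` while the portal shows `Installed`):

```azurecli
# Return only the provisioning state of the extension.
az k8s-extension show \
  --name azuremonitor-metrics \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --cluster-type connectedClusters \
  --query provisioningState \
  --output tsv
```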
--
-### Delete the extension instance
-
-To delete the extension instance, use the following CLI command:
-
-```azurecli
-az k8s-extension delete --name azuremonitor-metrics -g <cluster_resource_group> -c <cluster_name> -t connectedClusters
-```
-
-The command only deletes the extension instance. The Azure Monitor workspace and its data are not deleted.
-
-## Disconnected clusters
-
-If your cluster is disconnected from Azure for more than 48 hours, Azure Resource Graph won't have information about your cluster. As a result, your Azure Monitor Workspace may have incorrect information about your cluster state.
-
-## Troubleshooting
-
-For issues with the extension, see the [Troubleshooting Guide](./prometheus-metrics-troubleshoot.md).
-
-## Next Steps
-
-+ [Default Prometheus metrics configuration in Azure Monitor ](prometheus-metrics-scrape-default.md)
-+ [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md)
-+ [Use Azure Monitor managed service for Prometheus as data source for Grafana using managed system identity](../essentials/prometheus-grafana.md)
-+ [Configure self-managed Grafana to use Azure Monitor managed service for Prometheus with Microsoft Entra ID](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-multiple-workspaces.md
Routing metrics to more Azure Monitor workspaces can be done through the creatio
## Send same metrics to multiple Azure Monitor workspaces
-You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor workspaces from the same Kubernetes cluster. In case you have a very high volume of metrics, a new Data Collection Endpoint can be created as well. Please refer to the service limits [document](../service-limits.md) regarding ingestion limits. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs and DCEs (if applicable) for your additional Azure Monitor workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, add another DCE (if applicable), add the Monitor Reader Role for the new Azure Monitor workspace and add an additional Azure Monitor workspace integration for Grafana.
+You can create multiple data collection rules (DCRs) that point to the same data collection endpoint (DCE) to send metrics from the same Kubernetes cluster to additional Azure Monitor workspaces. If you have a very high volume of metrics, you can also create a new DCE. Refer to the [service limits](../service-limits.md) for ingestion limits. Currently, this is only available through onboarding with Resource Manager templates. Follow the [regular onboarding process](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana), and then edit the same Resource Manager templates to add additional DCRs and DCEs (if applicable) for your additional Azure Monitor workspaces. For every additional Azure Monitor workspace, you'll need to edit the template to add parameters, add another DCR, add another DCE (if applicable), add the Monitor Reader Role for the new Azure Monitor workspace, and add another Azure Monitor workspace integration for Grafana.
- Add the following parameters: ```json
scrape_configs:
## Next steps - [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).-- [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md).
+- [Collect Prometheus metrics from AKS cluster](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
Title: Minimal Prometheus ingestion profile in Azure Monitor
-description: Describes minimal ingestion profile in Azure Monitor managed service for Prometheus and how you can configure it collect more data.
+description: Describes minimal ingestion profile in Azure Monitor managed service for Prometheus and how you can configure it to collect more data.
- Previously updated : 09/28/2022 Last updated : 1/28/2023
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
# Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus
-This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor.
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) in Azure Monitor.
## Configmaps
To further customize the default jobs to change properties like collection frequ
### Cluster alias The cluster label appended to every time series scraped uses the last part of the full AKS cluster's Azure Resource Manager resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/myclustername`, the cluster label is `myclustername`.
-To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can create this configmap if it doesn't exist in the cluster or you can edit the existing one if its already exists in your cluster.
+To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can create this configmap if it doesn't exist in the cluster or you can edit the existing one if it already exists in your cluster.
The new label also shows up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-default.md
Title: Default Prometheus metrics configuration in Azure Monitor description: This article lists the default targets, dashboards, and recording rules for Prometheus metrics in Azure Monitor. - Previously updated : 09/28/2022 Last updated : 11/28/2023 # Default Prometheus metrics configuration in Azure Monitor
-This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an Azure Kubernetes Service (AKS) cluster](prometheus-metrics-enable.md) for any AKS cluster.
+This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an Azure Kubernetes Service (AKS) cluster](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) for any AKS cluster.
+
+## Minimal ingestion profile
+`Minimal ingestion profile` is a setting that helps reduce the ingestion volume of metrics, because only the metrics used by default dashboards, default recording rules, and default alerts are collected. For add-on based collection, the `Minimal ingestion profile` setting is enabled by default. You can modify the collection to enable collecting more metrics, as specified below.
## Scrape frequency The default scrape frequency for all default targets and scrapes is 30 seconds. ## Targets scraped by default
+The following targets are **enabled/ON** by default, meaning you don't have to provide any scrape job configuration for them; the metrics add-on scrapes these targets automatically by default:
- `cadvisor` (`job=cadvisor`) - `nodeexporter` (`job=node`)
Two default jobs can be run for Windows that scrape metrics required for the das
- `kube-proxy-windows` (`job=kube-proxy-windows`) > [!NOTE]
-> This requires applying or updating the `ama-metrics-settings-configmap` configmap and installing `windows-exporter` on all Windows nodes. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
+> This requires applying or updating the `ama-metrics-settings-configmap` configmap and installing `windows-exporter` on all Windows nodes. For more information, see the [enablement document](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
## Metrics scraped for Windows
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
See [Azure Monitor managed service for Prometheus remote write](prometheus-remot
## Next steps -- [Collect Prometheus metrics from an AKS cluster](../containers/prometheus-metrics-enable.md)
+- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). - [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md) - [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md)
azure-monitor Prometheus Remote Write Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity.md
To configure remote write for Azure Monitor managed service for Prometheus using
``` ## Next steps -- [Collect Prometheus metrics from an AKS cluster](../containers/prometheus-metrics-enable.md)
+- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) - [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md) - [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)
azure-monitor Prometheus Remote Write Azure Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-azure-workload-identity.md
Use the sample yaml below if you're using kube-prometheus-stack:
## Next steps
-* [Collect Prometheus metrics from an AKS cluster](../containers/prometheus-metrics-enable.md)
+* [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
* [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) * [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md) * [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-managed-identity.md
See [Azure Monitor managed service for Prometheus remote write](prometheus-remot
## Next steps -- [Collect Prometheus metrics from an AKS cluster](../containers/prometheus-metrics-enable.md)
+- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) - [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md) - [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md
az monitor data-collection rule show --name "myCollectionRule" --resource-group
## Next steps - [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).-- [Collect Prometheus metrics from an AKS cluster](../containers/prometheus-metrics-enable.md)
+- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
- [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md) - [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md) - [Configure remote write for Azure Monitor managed service for Prometheus using Azure Workload Identity (preview)](./prometheus-remote-write-azure-workload-identity.md)
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Several other features don't have a direct cost, but you instead pay for the ing
| Logs | Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application insights resources](app/convert-classic-resource.md). This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. | | Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. | | Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
+| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. | | Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
resource workspace 'microsoft.monitor/accounts@2021-06-03-preview' = {
When you create an Azure Monitor workspace, a new resource group is created. The resource group name has the following format: `MA_<azure-monitor-workspace-name>_<location>_managed`, where the tokenized elements are lowercased. The resource group contains both a data collection endpoint and a data collection rule with the same name as the workspace. The resource group and its resources are automatically deleted when you delete the workspace.
-To connect your Azure Monitor managed service for Prometheus to your Azure Monitor workspace, see [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
+To connect your Azure Monitor managed service for Prometheus to your Azure Monitor workspace, see [Collect Prometheus metrics from AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
## Delete an Azure Monitor workspace
Output
### [Resource Manager](#tab/resource-manager)
-To set up an Azure monitor workspace as a data source for Grafana using a Resource Manager template, see [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md?tabs=resource-manager#enable-prometheus-metric-collection)
+To set up an Azure monitor workspace as a data source for Grafana using a Resource Manager template, see [Collect Prometheus metrics from AKS cluster](../containers/kubernetes-monitoring-enable.md?tabs=arm#enable-prometheus-and-grafana)
Yes. When you use managed service for Prometheus, you can create your Azure Moni
## Next steps-- [Link a Grafana instance to your Azure Monitor workspace](./prometheus-metrics-enable.md#enable-prometheus-metric-collection) - Learn more about the [Azure Monitor data platform](../data-platform.md). - [Azure Monitor workspace Overview](./azure-monitor-workspace-overview.md)
azure-monitor Data Collection Rule Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-best-practices.md
Title: Best practices for data collection rule creation and management in Azure Monitor description: Details on the best practices to be followed to correctly create and maintain data collection rule in Azure Monitor. Previously updated : 12/14/2022 Last updated : 01/08/2024
azure-monitor Data Collection Rule Create Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-create-edit.md
The following table lists methods to create data collection scenarios using the
|:|:|:| | Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then associate that rule with one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. | | | [Enable VM insights overview](../vm/vminsights-enable-overview.md) | When you enable VM insights on a VM, the Azure Monitor agent is installed, and a DCR is created that collects a predefined set of performance counters. You shouldn't modify this DCR. |
-| Container insights | [Enable Container insights](../containers/prometheus-metrics-enable.md) | When you enable Container insights on a Kubernetes cluster, a containerized version of the Azure Monitor agent is installed, and a DCR is created that collects data according to the configuration you selected. You may need to modify this DCR to add a transformation. |
+| Container insights | [Enable Container insights](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) | When you enable Container insights on a Kubernetes cluster, a containerized version of the Azure Monitor agent is installed, and a DCR is created that collects data according to the configuration you selected. You may need to modify this DCR to add a transformation. |
| Text or JSON logs | [Collect logs from a text or JSON file with Azure Monitor Agent](../agents/data-collection-text-log.md?tabs=portal) | Use the Azure portal to create a DCR to collect entries from a text log on a machine with Azure Monitor Agent. | | Workspace transformation | [Add a transformation in a workspace data collection rule using the Azure portal](../logs/tutorial-workspace-transformations-portal.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
Title: Use Azure Monitor managed service for Prometheus as data source for Grafa
description: Details on how to configure Azure Monitor managed service for Prometheus as data source for both Azure Managed Grafana and self-hosted Grafana in an Azure virtual machine. Previously updated : 09/28/2022 Last updated : 01/08/2024 # Use Azure Monitor managed service for Prometheus as data source for Grafana using managed system identity
This section provides answers to common questions.
## Next steps - [Configure self-managed Grafana to use Azure-managed Prometheus with Microsoft Entra ID](./prometheus-self-managed-grafana-azure-active-directory.md).-- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md).
+- [Collect Prometheus metrics for your AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus can currently collect data from any
## Enable The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics. -- To collect Prometheus metrics from your AKS cluster without using Container insights, see [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md).-- To add collection of Prometheus metrics to your cluster using Container insights, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md#send-data-to-azure-monitor-managed-service-for-prometheus).
+- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md).
- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md).-- To collect Prometheus metrics from your Azure Arc-enabled Kubernetes cluster without using Container insights, see [Collect Prometheus metrics from Azure Arc-enabled Kubernetes cluster](./prometheus-metrics-from-arc-enabled-cluster.md) ## Grafana integration The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for
- Scraping and storing metrics at frequencies less than 1 second isn't supported. - Metrics with the same label names but different cases are rejected during ingestion (ex;- `diskSize(cluster="eastus", node="node1", filesystem="usr_mnt", FileSystem="usr_opt")` is invalid due to `filesystem` and `FileSystem` labels, and are rejected). - Microsoft Azure operated by 21Vianet cloud and Air gapped clouds aren't supported for Azure Monitor managed service for Prometheus.-- To monitor Windows nodes & pods in your cluster(s), follow steps outlined [here](./prometheus-metrics-enable.md#enable-windows-metrics-collection).
+- To monitor Windows nodes & pods in your cluster(s), follow steps outlined [here](../containers/kubernetes-monitoring-enable.md#enable-windows-metrics-collection-preview).
- Azure Managed Grafana isn't currently available in the Azure US Government cloud. - Usage metrics (metrics under `Metrics` menu for the Azure Monitor workspace) - Ingestion quota limits and current usage for any Azure monitor Workspace aren't available yet in US Government cloud. - During node updates, you might experience gaps lasting 1 to 2 minutes in some metric collections from our cluster level collector. This gap is due to a regular action from Azure Kubernetes Service to update the nodes in your cluster. This behavior is expected and occurs due to the node it runs on being updated. None of our recommended alert rules are affected by this behavior.
If you use the Azure portal to enable Prometheus metrics collection and install
## Next steps -- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).
+- [Enable Azure Monitor managed service for Prometheus on your Kubernetes clusters](../containers/kubernetes-monitoring-enable.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
Previously updated : 09/28/2022 Last updated : 11/09/2023 # Azure Monitor managed service for Prometheus rule groups
azure-monitor Prometheus Self Managed Grafana Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-self-managed-grafana-azure-active-directory.md
This section provides answers to common questions.
## Next steps - [Configure Grafana using managed system identity](./prometheus-grafana.md).-- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md).
+- [Collect Prometheus metrics for your AKS cluster](../containers/kubernetes-monitoring-enable.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md
This article introduces workbooks for Azure Monitor workspaces and shows you how
## Prerequisites To query Prometheus metrics from an Azure Monitor workspace, you need the following: - An Azure Monitor workspace. To create an Azure Monitor workspace, see [Create an Azure Monitor Workspace](./azure-monitor-workspace-overview.md?tabs=azure-portal.md).-- Your Azure Monitor workspace must be [collecting Prometheus metrics](./prometheus-metrics-enable.md) from an AKS cluster.
+- Your Azure Monitor workspace must be [collecting Prometheus metrics](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) from an AKS cluster.
- The user must be assigned role that can perform the **microsoft.monitor/accounts/read** operation on the Azure Monitor workspace. ## Prometheus Explorer workbook
This section provides answers to common questions.
[!INCLUDE [prometheus-faq-i-see-gaps-in-metric-data](../includes/prometheus-faq-i-see-gaps-in-metric-data.md)] ## Next steps
-* [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
+* [Collect Prometheus metrics from AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
* [Azure Monitor workspace](./azure-monitor-workspace-overview.md) * [Use Azure Monitor managed service for Prometheus as data source for Grafana using managed system identity](./prometheus-grafana.md)
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
In the request body, provide a link to your template and parameter file.
- [Metric alert rules](alerts/resource-manager-alerts-metric.md): Configure alerts from metrics that use different kinds of logic. - [Application Insights](app/resource-manager-app-resource.md) - [Diagnostic settings](essentials/resource-manager-diagnostic-settings.md): Create diagnostic settings to forward logs and metrics from different resource types.-- [Enable Prometheus metrics](essentials/prometheus-metrics-enable.md?tabs=resource-manager#enable-prometheus-metric-collection): Install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
+- [Enable Prometheus metrics](containers/kubernetes-monitoring-enable.md?tabs=arm#enable-prometheus-and-grafana): Install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
- [Log queries](logs/resource-manager-log-queries.md): Create saved log queries in a Log Analytics workspace. - [Log Analytics workspace](logs/resource-manager-workspace.md): Create a Log Analytics workspace and configure a collection of data sources from the Log Analytics agent. - [Workbooks](visualize/resource-manager-workbooks.md): Create workbooks.
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
The composite bar view for a graph with the preceding settings will look like th
## Next steps
-[Get started with Azure Workbooks](workbooks-getting-started.md)
+[Get started with Azure Workbooks](workbooks-overview.md)
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
- Title: Azure Monitor workbooks with custom parameters
-description: Simplify complex reporting with prebuilt and custom parameterized workbooks.
----- Previously updated : 06/21/2023--
-# Workbook configuration options
-
-You can configure workbooks to suit your needs by using the settings in the **Settings** tab. If query or metrics steps display time-based data, more settings are available on the **Advanced settings** tab.
-
-## Workbook settings
-
-Workbook settings have these tabs to help you configure your workbook.
-
-|Settings tab |Description |
-|||
-|Resources|This tab contains the resources that appear as default selections in this workbook.<br>The resource marked as the **Owner** is where the workbook will be saved and the location of the workbooks and templates you'll see when you're browsing. The owner resource can't be removed.<br> You can add a default resource by selecting **Add Resources**. You can remove resources by selecting a resource or several resources and selecting **Remove Selected Resources**. When you're finished adding and removing resources, select **Apply Changes**.|
-|Versions| This tab contains a list of all the available versions of this workbook. Select a version and use the toolbar to compare, view, or restore versions. Previous workbook versions are available for 90 days.<br><ul><li>**Compare**: Compares the JSON of the previous workbook to the most recently saved version.</li><li>**View**: Opens the selected version of the workbook in a context pane.</li><li>**Restore**: Saves a new copy of the workbook with the contents of the selected version and overwrites any existing current content. You'll be prompted to confirm this action.</li></ul><br>|
-|Style |On this tab, you can set a padding and spacing style for the whole workbook. The possible options are **Wide**, **Standard**, **Narrow**, and **None**. The default style setting is **Standard**.|
-|Pin |While in pin mode, you can select **Pin Workbook** to pin a component from this workbook to a dashboard. Select **Link to Workbook** to pin a static link to this workbook on your dashboard. You can choose a specific component in your workbook to pin.|
-|Trusted hosts |On this tab, you can enable a trusted source or mark this workbook as trusted in this browser. For more information, see [Trusted hosts](#trusted-hosts). |
-
-> [!NOTE]
-> Version history isn't available for [bring-your-own-storage](workbooks-bring-your-own-storage.md) workbooks.
-
-#### Versions tab
--
-#### Compare versions
--
-### Trusted hosts
-
-Enable a trusted source or mark this workbook as trusted in this browser.
-
-| Control | Definition |
-| -- | -- |
-| Mark workbook as trusted | If enabled, this workbook can call any endpoint, whether the host is marked as trusted or not. A workbook is trusted if it's a new workbook, an existing workbook that's saved, or is explicitly marked as a trusted workbook. |
-| URL grid | A grid to explicitly add trusted hosts. |
-
-## Time brushing
-
-Time range brushing allows a user to "brush" or "scrub" a range on a chart and have that range output as a parameter value.
--
-You can also choose to only export a parameter when a range is explicitly brushed:
--
-### Brushing in a metrics chart
-
-When you enable time brushing on a metrics chart, you can "brush" a time by dragging the mouse on the time chart.
--
-After the brush has stopped, the metrics chart zooms in to that range and exports the range as a time range parameter.
-An icon on the toolbar in the upper-right corner is active to reset the time range back to its original, unzoomed time range.
-
-### Brushing in a query chart
-
-When you enable time brushing on a query chart, indicators appear that you can drag, or you can brush a range on the time chart.
--
-After the brush has stopped, the query chart shows that range as a time range parameter but won't zoom in. This behavior is different than the behavior of metrics charts. Because of the complexity of user-written queries, it might not be possible for workbooks to correctly update the range used by the query in the query content directly. If the query is using a time range parameter, it's possible to get this behavior by using a [global parameter](workbooks-parameters.md#global-parameters) instead.
-
-An icon on the toolbar in the upper-right corner is active to reset the time range back to its original, unzoomed time range.
-
-## Interactivity
-
-There are several ways that you can create interactive reports and experiences in workbooks:
--
-### Set up a grid row click
-
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Select **Add query** to add a log query control to the workbook.
-1. Select the log query type, the resource type, and the target resources.
-1. Use the query editor to enter the KQL for your analysis:
-
- ```kusto
- requests
- | summarize AllRequests = count(), FailedRequests = countif(success == false) by Request = name
- | order by AllRequests desc
- ```
-
-1. Select **Run query** to see the results.
-1. Select **Advanced Settings** to open the **Advanced Settings** pane.
-1. Select the **When an item is selected, export a parameter** checkbox.
-1. Select **Add Parameter** and fill in the following information:
- - **Field to export**: `Request`
- - **Parameter name**: `SelectedRequest`
- - **Default value**: `All requests`
-
- :::image type="content" source="media/workbooks-configurations/workbooks-export-parameters-add.png" alt-text="Screenshot that shows the Advanced Settings workbook editor with settings for exporting fields as parameters.":::
-
-1. Optional. If you want to export the entire contents of the selected row instead of a specific column, leave **Field to export** unset. The entire row's contents are exported as JSON to the parameter. On the referencing KQL control, use the `todynamic` function to parse the JSON and access the individual columns.
-1. Select **Save**.
-1. Select **Done Editing**.
-1. Add another query control as in the preceding steps.
-1. Use the query editor to enter the KQL for your analysis.
-
- ```kusto
- requests
- | where name == '{SelectedRequest}' or 'All Requests' == '{SelectedRequest}'
- | summarize ['{SelectedRequest}'] = count() by bin(timestamp, 1h)
- ```
-
-1. Select **Run query** to see the results.
-1. Change **Visualization** to **Area chart**.
-1. Choose a row to select in the first grid. Note how the area chart below filters to the selected request.
-
-The resulting report looks like this example in edit mode:
-
- :::image type="content" source="media/workbooks-configurations/workbooks-interactivity-grid-create.png" alt-text="Screenshot that shows workbooks with the first two queries in edit mode.":::
-
-The following image shows a more elaborate interactive report in read mode based on the same principles. The report uses grid clicks to export parameters, which in turn are used in two charts and a text block.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-interactivity-grid-read.png" alt-text="Screenshot that shows a workbook report using grid clicks.":::
-
-### Set up grid cell clicks
-
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Select **Add query** to add a log query control to the workbook.
-1. Select the log query type, resource type, and target resources.
-1. Use the query editor to enter the KQL for your analysis:
-
- ```kusto
- requests
- | summarize Count = count(), Sample = any(pack_all()) by Request = name
- | order by Count desc
- ```
-
-1. Select **Run query** to see the results.
-1. Select **Column Settings** to open the settings pane.
-1. In the **Columns** section, set:
- - **Sample**
- - **Column renderer**: `Link`
- - **View to open**: `Cell Details`
- - **Link label**: `Sample`
- - **Count**
- - **Column renderer**: `Bar`
- - **Color palette**: `Blue`
- - **Minimum value**: `0`
- - **Request**
- - **Column renderer**: `Automatic`
-1. Select **Save and Close** to apply changes.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-column-settings.png" alt-text="Screenshot that shows the Edit column settings pane.":::
-
-1. Select a **Sample** link in the grid to open a pane with the details of a sampled request.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-grid-link-details.png" alt-text="Screenshot that shows the Details pane of the sample request.":::
-
-### Link renderer actions
-
-Learn about how [link actions](workbooks-link-actions.md) work to enhance workbook interactivity.
-
-### Set conditional visibility
-
-1. Follow the steps in the [Set up a grid row click](#set-up-a-grid-row-click) section to set up two interactive controls.
-1. Add a new parameter with these values:
- - **Parameter name**: `ShowDetails`
- - **Parameter type**: `Drop down`
- - **Required**: `checked`
- - **Get data from**: `JSON`
- - **JSON Input**: `["Yes", "No"]`
-1. Select **Save** to commit changes.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-edit-parameter.png" alt-text="Screenshot that shows editing an interactive parameter in workbooks.":::
-
-1. Set the parameter value to `Yes`.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-set-parameter.png" alt-text="Screenshot that shows setting an interactive parameter value in a workbook.":::
-
-1. In the query control with the area chart, select **Advanced Settings** (the gear icon).
-1. If **ShowDetails** is set to `Yes`, select **Make this item conditionally visible**.
-1. Select **Done Editing** to commit the changes.
-1. On the workbook toolbar, select **Done Editing**.
-1. Switch the value of **ShowDetails** to `No`. Notice that the chart below disappears.
-
-The following image shows the case where **ShowDetails** is `Yes`:
-
- :::image type="content" source="media/workbooks-configurations/workbooks-conditional-visibility-visible.png" alt-text="Screenshot that shows a workbook with a conditional component that's visible.":::
-
-The following image shows the hidden case where **ShowDetails** is `No`:
--
-### Set up multi-selects in grids and charts
-
-Query and metrics components can export parameters when a row or multiple rows are selected.
--
-1. In the query component that displays the grid, select **Advanced settings**.
-1. Select the **When items are selected, export parameters** checkbox.
-1. Select the **Allow selection of multiple values** checkbox.
- - The displayed visualization allows multi-selecting and the exported parameter's values will be arrays of values, like when using multi-select dropdown parameters.
- - If cleared, the display visualization only captures the last selected item and exports only a single value at a time.
-1. Use **Add Parameter** for each parameter you want to export. A pop-up window appears with the settings for the parameter to be exported.
-
-When you enable single selection, you can specify which field of the original data to export. Fields include parameter name, parameter type, and default value to use if nothing is selected.
-
-When you enable multi-selection, you specify which field of the original data to export. Fields include parameter name, parameter type, quote with, and delimiter. The quote with and delimiter values are used when turning array values into text when they're being replaced in a query. In multi-selection, if no values are selected, the default value is an empty array.
-
-> [!NOTE]
-> For multi-selection, only unique values are exported. For example, you won't see output array values like "1,1,2,1". The array output will be "1,2".
-
-If you leave the **Field to export** setting empty in the export settings, all the available fields in the data will be exported as a stringified JSON object of key:value pairs. For grids and titles, the string includes the fields in the grid. For charts, the available fields are x,y,series, and label, depending on the type of chart.
-
-While the default behavior is to export a parameter as text, if you know the field is a subscription or resource ID, use that information as the export parameter type. Then the parameter can be used downstream in places that require those types of parameters.
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Title: Creating an Azure Workbook
+ Title: Create or edit an Azure Workbook
description: Learn how to create an Azure Workbook. Previously updated : 06/21/2023- Last updated : 01/08/2024+
-# Creating an Azure Workbook
+# Create or edit an Azure Workbook
This article describes how to create a new workbook and how to add elements to your Azure Workbook. This video walks you through creating workbooks.
To create a new Azure workbook:
## Add text
-Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
-
+Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the data, information to help users interpret the data, section headings, etc.
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" lightbox="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
-Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
+Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references are updated as the parameters change.
**Edit mode**: <!-- convertborder later -->
To add text to an Azure workbook:
- Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add text**. 1. Enter markdown text into the editor field. 1. Use the **Text Style** option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
-
+ > [!TIP] > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
-1. Use the **Preview** tab to see how your content will look. The preview shows the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, without a scrollbar.
+1. Use the **Preview** tab to see how your content looks. The preview shows the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content expands to fill whatever space it needs, without a scrollbar.
1. Select **Done Editing**. ### Text styles
You can also choose a text parameter as the source of the style. The parameter v
:::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" lightbox="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style." border="false"::: **Warning style example**:
-
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" lightbox="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style."::: ## Add queries
-Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md).
+Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md).
For example, you can query Azure Resource Health to help you view any service problems affecting your resources. You can also query Azure Monitor metrics, which is numeric data collected at regular intervals. Azure Monitor metrics provide information about an aspect of a system at a particular time.
To add a query to an Azure Workbook:
1. When you're sure you have the query you want in your workbook, select **Done editing**.
-### Best practices for querying logs
+### Best practices for querying logs
- **Use the smallest possible time ranges.** The longer the time ranges, the slower the queries, and the more data returned. For longer time ranges, the query might have to go to slower "cold" storage, making the query even slower. Default to the shortest useful time range, but allow the user to pick a larger time range that may be slower. - **Use the "All" special value in dropdowns.** You can add an **All** special item in the dropdown parameter settings. You can use a special value. Using an **All** special item correctly can dramatically simplify queries.
+ - **Protect against missing columns.** If you're using a custom table or custom columns, design your template so that it works if the column is missing in a workspace. See the [column_ifexists](/azure/kusto/query/columnifexists) function.
+ - **Protect against a missing table.** If your template is installed as part of a solution, or in other cases where the tables are guaranteed to exist, checking for the missing table is unnecessary. If you're creating generic templates that could be visible on any resource or workspace, it's a good idea to protect against tables that don't exist.
The log analytics query language doesn't have a **table_ifexists** function like the function for testing for columns. However, there are some ways to check if a table exists. For example, you can use a [fuzzy union](/azure/kusto/query/unionoperator?pivots=azuredataexplorer). When doing a union, you can use the **isfuzzy=true** setting to let the union continue if some of the tables don't exist. You can add a parameter query in your workbook that checks for existence of the table, and hides some content if it doesn't. Items that aren't visible aren't run, so you can design your template so that other queries in the workbook that would fail if the table doesn't exist, don't run until after the test verifies that the table exists. For example: ``` let MissingTable = view () { print isMissing=1 };
- union isfuzzy=true MissingTable, (AzureDiagnostics | getschema | summarize c=count() | project isMissing=iff(c > 0, 0, 1))
+ union isfuzzy=true MissingTable, (AzureDiagnostics | getschema | summarize c=count() | project isMissing=iff(c > 0, 0, 1))
| top 1 by isMissing asc ```
- This query returns a **1** if the **AzureDiagnostics** table doesn't exist in the workspace. If the real table doesn't exist, the fake row of the **MissingTable** will be returned. If any columns exist in the schema for the **AzureDiagnostics** table, a **0** is returned. You could use this as a parameter value, and conditionally hide your query steps unless the parameter value is 0. You could also use conditional visibility to show text that says that the current workspace does not have the missing table, and send the user to documentation on how to onboard.
+ This query returns a **1** if the **AzureDiagnostics** table doesn't exist in the workspace. If the real table doesn't exist, the fake row of the **MissingTable** will be returned. If any columns exist in the schema for the **AzureDiagnostics** table, a **0** is returned. You could use this as a parameter value, and conditionally hide your query steps unless the parameter value is 0. You could also use conditional visibility to show text that says that the current workspace doesn't have the missing table, and send the user to documentation on how to onboard.
Instead of hiding steps, you may just want to have no rows as a result. You can change the **MissingTable** to be an empty data table with the appropriate matching schema:
-
+ ``` let MissingTable = datatable(ResourceId: string) []; union isfuzzy=true MissingTable, (AzureDiagnostics
This video shows you how to use resource level logs queries in Azure Workbooks.
[![Making resource centric log queries in workbooks](https://img.youtube.com/vi/8CvjM0VvOA8/0.jpg)](https://youtu.be/8CvjM0VvOA8 "Video showing how to make resource centric log queries in workbooks") #### Dynamic resource type parameter
-Uses dynamic scopes for more efficient querying. The snippet below uses this heuristc:
+Uses dynamic scopes for more efficient querying. The snippet below uses this heuristic:
1. _Individual resources_: if the count of selected resource is less than or equal to 5 2. _Resource groups_: if the number of resources is over 5 but the number of resource groups the resources belong to is less than or equal to 3 3. _Subscriptions_: otherwise
Resources
| project x = dynamic(["microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets", "microsoft.resources/resourcegroups", "microsoft.resources/subscriptions"]) | mvexpand x to typeof(string) | extend jkey = 1
-| join kind = inner (Resources
+| join kind = inner (Resources
| where id in~ ({VirtualMachines}) | summarize Subs = dcount(subscriptionId), resourceGroups = dcount(resourceGroup), resourceCount = count() | extend jkey = 1) on jkey
-| project x, label = 'x',
+| project x, label = 'x',
selected = case(
- x in ('microsoft.compute/virtualmachinescalesets', 'microsoft.compute/virtualmachines') and resourceCount <= 5, true,
- x == 'microsoft.resources/resourcegroups' and resourceGroups <= 3 and resourceCount > 5, true,
- x == 'microsoft.resources/subscriptions' and resourceGroups > 3 and resourceCount > 5, true,
+ x in ('microsoft.compute/virtualmachinescalesets', 'microsoft.compute/virtualmachines') and resourceCount <= 5, true,
+ x == 'microsoft.resources/resourcegroups' and resourceGroups <= 3 and resourceCount > 5, true,
+ x == 'microsoft.resources/subscriptions' and resourceGroups > 3 and resourceCount > 5, true,
false) ``` #### Static resource scope for querying multiple resource types
Resources
#### Resource parameter grouped by resource type ``` Resources
-| where type =~ 'microsoft.compute/virtualmachines' or type =~ 'microsoft.compute/virtualmachinescalesets'
-| where resourceGroup in~({ResourceGroups})
-| project value = id, label = id, selected = false,
- group = iff(type =~ 'microsoft.compute/virtualmachines', 'Virtual machines', 'Virtual machine scale sets')
+| where type =~ 'microsoft.compute/virtualmachines' or type =~ 'microsoft.compute/virtualmachinescalesets'
+| where resourceGroup in~({ResourceGroups})
+| project value = id, label = id, selected = false,
+ group = iff(type =~ 'microsoft.compute/virtualmachines', 'Virtual machines', 'Virtual machine scale sets')
``` ## Add parameters You can collect input from consumers and reference it in other parts of the workbook using parameters. Use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences. For more information on how parameters can be used, see [workbook parameters](workbooks-parameters.md).
-Workbooks allow you to control how your parameter controls are presented to consumers ΓÇô text box vs. drop down, single- vs. multi-select, values from text, JSON, KQL, or Azure Resource Graph, etc.
+Workbooks allow you to control how your parameter controls are presented to consumers ΓÇô text box vs. drop down, single- vs. multi-select, values from text, JSON, KQL, or Azure Resource Graph, etc.
Watch this video to learn how to use parameters and log data in Azure Workbooks. > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59Wee]
To add a parameter to an Azure Workbook:
- Parameter name: Parameter names can't include spaces or special characters - Display name: Display names can include spaces, special characters, emoji, etc.
- - Parameter type:
- - Required:
-
+ - Parameter type:
+ - Required:
+ 1. Select **Done editing**. <!-- convertborder later --> :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" lightbox="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter." border="false"::: ## Add metric charts
-Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts.
+Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts.
+
+The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
-The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
-
:::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook."::: To add a metric chart to an Azure Workbook:
This is a metric chart in edit mode:
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" lightbox="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency." border="false":::
-## Add links
+## Add links
-You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
+You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" lightbox="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook." border="false":::
Links can use all of the link actions available in [link actions](workbooks-link
| Action | Description | |:- |:-| |Set a parameter value | A parameter can be set to a value when selecting a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.|
-|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
+|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience.|
### Tabs
-Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create 2 tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
+Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create 2 tabs, where selecting either tab sets a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" lightbox="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot of creating tabs in workbooks." border="false":::
You can then add other items in the workbook that are conditionally visible if t
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" lightbox="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot of conditionally visible tab in workbooks." border="false":::
-The first tab is selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
-
+The first tab is selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab changes the value of the parameter to "2", and different content is displayed:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" lightbox="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot of workbooks with content displayed when selected tab is 2."::: A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
- URL links aren't supported in tabs. A URL link in a tab appears as a disabled tab. - No item styling is supported in tabs. Items are displayed as tabs, and only the tab name (link text) field is displayed. Fields that aren't used in tab style are hidden while in edit mode.
+ - The first tab is selected by default, invoking whatever action that tab specifies. If the first tab's action opens another view, a view appears as soon as the tabs are created.
+ - You can use tabs to open other views, but use this functionality sparingly, because most users don't expect to navigate by selecting a tab. If other tabs set a parameter to a specific value, a tab that opens a view doesn't change that value, so the rest of the workbook content continues to show the view or data for the previous tab.
### Toolbars
Use the Toolbar style to have your links appear styled as a toolbar. In toolbar
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" lightbox="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" alt-text="Screenshot of creating links styled as a toolbar in workbooks." border="false":::
-If any required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
+If any required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset, the toolbar button is disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
A sample workbook with toolbars, globals parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
To add a group to your workbook:
1. Select items for your group. 1. Select **Done editing.**
- This is a group in read mode with two items inside: a text item and a query item.
+ This is a group in read mode with two items inside: a text item and a query item.
<!-- convertborder later --> :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" lightbox="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot showing a group in read mode in a workbook." border="false":::
A group is treated as a new scope in the workbook. Any parameters created in the
You can specify which type of group to add to your workbook. There are two types of groups: - **Editable**: The group in the workbook allows you to add, remove, or edit the contents of the items in the group. This is most commonly used for layout and visibility purposes.
+ - **From a template**: The group in the workbook loads from the contents of another workbook by its ID. The content of that workbook is loaded and merged into the workbook at runtime. In edit mode, you can't modify any of the contents of the group, as they'll just load again from the template next time the item loads. When loading a group from a template, use the full Azure Resource ID of an existing workbook.
### Load types
For groups created from a template, the content of the template isn't retrieved
#### Explicit loading
-In this mode, a button is displayed where the group would be, and no content is retrieved or created until the user explicitly clicks the button to load the content. This is useful in scenarios where the content might be expensive to compute or rarely used. The author can specify the text to appear on the button.
+In this mode, a button is displayed where the group would be, and no content is retrieved or created until the user explicitly selects the button to load the content. This is useful in scenarios where the content might be expensive to compute or rarely used. The author can specify the text to appear on the button.
This screenshot shows explicit load settings with a configured "Load more" button. <!-- convertborder later -->
This is the group before being loaded in the workbook:
The group after being loaded in the workbook:
-
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" lightbox="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" alt-text="Screenshot showing an explicit group after being loaded in the workbook."::: #### Always mode
In **Always** mode, the content of the group is always loaded and created as soo
### Using templates inside a group
-When a group is configured to load from a template, by default, that content will be loaded in lazy mode, and it will only load when the group is visible.
+When a group is configured to load from a template, by default, that content is loaded in lazy mode, and it will only load when the group is visible.
-When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names will be merged out of the template being loaded. If all parameters in a parameter step are merged out, the entire parameters step will disappear.
+When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names will be merged out of the template being loaded. If all parameters in a parameter step are merged out, the entire parameters step disappears.
#### Example 1: All parameters have identical names
If the loaded template had contained **TimeRange** and **Filter** (instead of **
### Splitting a large template into many templates
-To improve performance, it's helpful to break up a large template into multiple smaller templates that loads some content in lazy mode or on demand by the user. This makes the initial load faster since the top-level template can be much smaller.
+To improve performance, it's helpful to break up a large template into multiple smaller templates that load some content in lazy mode or on demand by the user. This makes the initial load faster because the top-level template can be smaller.
When splitting a template into parts, you need to split the template into multiple templates (subtemplates) that all work individually. If the top-level template has a **TimeRange** parameter that other items use, each subtemplate also needs a parameters item that defines a parameter with exactly the same name. The subtemplates work independently and can load inside larger templates in groups.
To turn a larger template into multiple subtemplates:
1. Create a new empty group near the top of the workbook, after the shared parameters. This new group will eventually become a subtemplate. 1. Create a copy of the shared parameters step, and then use **move into group** to move the copy into the group created in step 1. This parameter allows the subtemplate to work independently of the outer template, and will get merged out when loaded inside the outer template.
-
+ > [!NOTE] > Subtemplates don't technically need to have the parameters that get merged out if you never plan on the sub-templates being visible by themselves. However, if the sub-templates do not have the parameters, it will make them very hard to edit or debug if you need to do so later. 1. Move each item in the workbook you want to be in the subtemplate into the group created in step 1.
-1. If the individual steps moved in step 3 had conditional visibilities, that will become the visibility of the outer group (like used in tabs). Remove them from the items inside the group and add that visibility setting to the group itself. Save here to avoid losing changes and/or export and save a copy of the json content.
-1. If you want that group to be loaded from a template, you can use the **Edit** toolbar button in the group. This will open just the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view (don't close the browser, just that view to go back to the previous workbook you were editing).
+1. If the individual steps moved in step 3 had conditional visibility settings, those become the visibility of the outer group (as used in tabs). Remove them from the items inside the group and add that visibility setting to the group itself. Save here to avoid losing changes, and/or export and save a copy of the JSON content.
+1. If you want that group to be loaded from a template, you can use the **Edit** toolbar button in the group. This opens just the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view (don't close the browser, just that view, to go back to the previous workbook you were editing).
1. You can then change the group step to load from template and set the template ID field to the workbook/template you created in step 5. To work with workbook IDs, the source needs to be the full Azure Resource ID of a shared workbook. Select **Load**, and the content of that group is now loaded from that subtemplate instead of being saved inside this outer workbook.++
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
With [Azure Monitor managed service for Prometheus](../essentials/prometheus-met
## Next steps
+ - [Get started with Azure Workbooks](workbooks-overview.md)
- [Create an Azure workbook](workbooks-create-workbook.md)
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
Title: Azure Monitor workbook dropdown parameters
-description: Simplify complex reporting with prebuilt and custom parameterized workbooks containing dropdown parameters.
+description: Use dropdown parameters to simplify complex reporting with prebuilt and custom parameterized workbooks.
Other common examples use '*' as the special marker value when a parameter is re
## Next steps
-[Getting started with Azure Workbooks](workbooks-getting-started.md)
+[Learn about the types of visualizations you can use to create rich visual reports with Azure Workbooks](workbooks-visualizations.md).
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
- Title: Common Azure Workbooks tasks
-description: Learn how to perform the commonly used tasks in workbooks.
--- Previously updated : 06/21/2023---
-# Get started with Azure Workbooks
-
-This article describes how to access Azure Workbooks and the common tasks used to work with workbooks.
-
-You can access Azure Workbooks in a few ways:
--- In the [Azure portal](https://portal.azure.com), select **Monitor** > **Workbooks** from the menu bars on the left.-
- :::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot that shows Workbooks in the menu.":::
--- On a **Log Analytics workspaces** page, select **Workbooks** at the top of the page.-
- :::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks on the Log Analytics workspaces page.":::
-
-When the gallery opens, select a saved workbook or a template. You can also search for a name in the search box.
-
-## Save a workbook
-
-To save a workbook, save the report with a specific title, subscription, resource group, and location.
-
-By default, the workbook is auto-filled with the same settings as the LA workspace, with the same subscription and resource group. Workbooks are saved to 'My Reports' by default, and are only accessible by the individual user, but they can be saved directly to shared reports or shared later on. Workbooks are shared resources and they require write access to the parent resource group to be saved.
-
-## Share a workbook
-
-When you want to share a workbook or template, keep in mind that the person you want to share with must have permissions to access the workbook. They must have an Azure account, and **Monitoring Reader** permissions.
-To share a workbook or workbook template:
-
-1. In the Azure portal, select the workbook or template you want to share.
-1. Select the **Share** icon from the top toolbar.
-1. The **Share workbook** or **Share template** window opens with a URL to use for sharing the workbook.
-1. Copy the link to share the workbook, or select **Share link via email** to open your default mail app.
--
-## Pin a visualization
-
-You can pin text, query, or metrics components in a workbook by using the **Pin** button on those items while the workbook is in pin mode. Or you can use the **Pin** button if the workbook author has enabled settings for that element to make it visible.
-
-To access pin mode, select **Edit** to enter editing mode. Select **Pin** on the top bar. An individual **Pin** then appears above each corresponding workbook part's **Edit** button on the right side of the screen.
--
-> [!NOTE]
-> The state of the workbook is saved at the time of the pin. Pinned workbooks on a dashboard won't update if the underlying workbook is modified. To update a pinned workbook part, you must delete and re-pin that part.
-
-### Time ranges for pinned queries
-
-Pinned workbook query parts will respect the dashboard's time range if the pinned item is configured to use a *TimeRange* parameter. The dashboard's time range value will be used as the time range parameter's value. Any change of the dashboard time range will cause the pinned item to update. If a pinned part is using the dashboard's time range, you'll see the subtitle of the pinned part update to show the dashboard's time range whenever the time range changes.
-
-Pinned workbook parts using a time range parameter will auto-refresh at a rate determined by the dashboard's time range. The last time the query ran will appear in the subtitle of the pinned part.
-
-If a pinned component has an explicitly set time range and doesn't use a time range parameter, that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part won't show the dashboard's time range. The query won't auto-refresh on the dashboard. The subtitle will show the last time the query executed.
-
-> [!NOTE]
-> Queries that use the *merge* data source aren't currently supported when pinning to dashboards.
-
-## Auto refresh
-
-Select **Auto refresh** to open a list of intervals that you can use to select the interval. The workbook will keep refreshing after the selected time interval.
-
-* **Auto refresh** only refreshes when the workbook is in read mode. If a user sets an interval of 5 minutes and after 4 minutes switches to edit mode, refreshing doesn't occur if the user is still in edit mode. But if the user returns to read mode, the interval of 5 minutes resets and the workbook will be refreshed after 5 minutes.
-* Selecting **Auto refresh** in read mode also resets the interval. If a user sets the interval to 5 minutes and after 3 minutes the user selects **Auto refresh** to manually refresh the workbook, the **Auto refresh** interval resets and the workbook will be auto-refreshed after 5 minutes.
-* This setting isn't saved with the workbook. Every time a user opens a workbook, **Auto refresh** is **Off** and needs to be set again.
-* Switching workbooks and going out of the gallery clears the **Auto refresh** interval.
---
-## Next steps
-
-[Azure Workbooks data sources](workbooks-data-sources.md)
azure-monitor Workbooks Interactive Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-interactive-reports.md
+
+ Title: Create interactive reports with Azure Monitor Workbooks
+description: This article explains how to create interactive reports in Azure Workbooks.
+ Last updated : 01/08/2024++++
+# Create interactive reports with Azure Monitor Workbooks
+
+There are several ways that you can create interactive reports and experiences in workbooks:
+
+ - **Parameters**: When you update a [parameter](workbooks-parameters.md), any control that uses the parameter automatically refreshes and redraws to reflect the new value. This behavior is how most of the Azure portal reports support interactivity. Workbooks provide this functionality in a straightforward manner with minimal user effort.
+ - **Grid, tile, and chart selections**: You can construct scenarios where selecting a row in a grid updates subsequent charts based on the content of the row. For example, you might have a grid that shows a list of requests and some statistics like failure counts. You can set it up so that if you select the row of a request, the detailed charts below update to show only that request. Learn how to [set up a grid row click](#set-up-a-grid-row-click).
+ - **Grid cell clicks**: You can add interactivity with a special type of grid column renderer called a [link renderer](#link-renderer-actions). A link renderer converts a grid cell into a hyperlink based on the contents of the cell. Workbooks support many kinds of link renderers including renderers that open resource overview panes, property bag viewers, and Application Insights search, usage, and transaction tracing. Learn how to [set up a grid cell click](#set-up-grid-cell-clicks).
+ - **Conditional visibility**: You can make controls appear or disappear based on the values of parameters. This way you can have reports that look different based on user input or telemetry state. For example, you can show consumers a summary when there are no issues. You can also show detailed information when there's something wrong. Learn how to [set up conditional visibility](#set-conditional-visibility).
+ - **Export parameters with multi-selections**: You can export parameters from query and metrics workbook components when a row or multiple rows are selected. Learn how to [set up multi-selects in grids and charts](#set-up-multi-selects-in-grids-and-charts).
+
+## Set up a grid row click
+
+1. Make sure you're in edit mode by selecting **Edit**.
+1. Select **Add query** to add a log query control to the workbook.
+1. Select the log query type, the resource type, and the target resources.
+1. Use the query editor to enter the KQL for your analysis:
+
+ ```kusto
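+    // One row per request name, with total and failed request counts; the Request column is exported when a row is selected.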
+ requests
+ | summarize AllRequests = count(), FailedRequests = countif(success == false) by Request = name
+ | order by AllRequests desc
+ ```
+
+1. Select **Run query** to see the results.
+1. Select **Advanced Settings** to open the **Advanced Settings** pane.
+1. Select the **When an item is selected, export a parameter** checkbox.
+1. Select **Add Parameter** and fill in the following information:
+ - **Field to export**: `Request`
+ - **Parameter name**: `SelectedRequest`
+ - **Default value**: `All requests`
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-export-parameters-add.png" alt-text="Screenshot that shows the Advanced Settings workbook editor with settings for exporting fields as parameters.":::
+
+1. Optional. If you want to export the entire contents of the selected row instead of a specific column, leave **Field to export** unset. The entire row's contents are exported as JSON to the parameter. On the referencing KQL control, use the `todynamic` function to parse the JSON and access the individual columns, as shown in the sketch at the end of this section.
+1. Select **Save**.
+1. Select **Done Editing**.
+1. Add another query control as in the preceding steps.
+1. Use the query editor to enter the KQL for your analysis.
+
+ ```kusto
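+    // '{SelectedRequest}' is filled in from the parameter exported by the grid above; the second clause keeps the chart showing all requests when the default value is selected.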
+ requests
+    | where name == '{SelectedRequest}' or 'All requests' == '{SelectedRequest}'
+ | summarize ['{SelectedRequest}'] = count() by bin(timestamp, 1h)
+ ```
+
+1. Select **Run query** to see the results.
+1. Change **Visualization** to **Area chart**.
+1. Choose a row to select in the first grid. Note how the area chart below filters to the selected request.
+
+The resulting report looks like this example in edit mode:
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-interactivity-grid-create.png" alt-text="Screenshot that shows workbooks with the first two queries in edit mode.":::
+
+The following image shows a more elaborate interactive report in read mode based on the same principles. The report uses grid clicks to export parameters, which in turn are used in two charts and a text block.
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-interactivity-grid-read.png" alt-text="Screenshot that shows a workbook report using grid clicks.":::
+
+## Set up grid cell clicks
+
+1. Make sure you're in edit mode by selecting **Edit**.
+1. Select **Add query** to add a log query control to the workbook.
+1. Select the log query type, resource type, and target resources.
+1. Use the query editor to enter the KQL for your analysis:
+
+ ```kusto
+ requests
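+    // any(pack_all()) keeps one complete sample row per request name as dynamic JSON; the column settings below render it as a Cell Details link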
+ | summarize Count = count(), Sample = any(pack_all()) by Request = name
+ | order by Count desc
+ ```
+
+1. Select **Run query** to see the results.
+1. Select **Column Settings** to open the settings pane.
+1. In the **Columns** section, set:
+ - **Sample**
+ - **Column renderer**: `Link`
+ - **View to open**: `Cell Details`
+ - **Link label**: `Sample`
+ - **Count**
+ - **Column renderer**: `Bar`
+ - **Color palette**: `Blue`
+ - **Minimum value**: `0`
+ - **Request**
+ - **Column renderer**: `Automatic`
+1. Select **Save and Close** to apply changes.
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-column-settings.png" alt-text="Screenshot that shows the Edit column settings pane.":::
+
+1. Select a **Sample** link in the grid to open a pane with the details of a sampled request.
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-grid-link-details.png" alt-text="Screenshot that shows the Details pane of the sample request.":::
+
+## Link renderer actions
+
+Learn about how [link actions](workbooks-link-actions.md) work to enhance workbook interactivity.
+
+## Set conditional visibility
+
+1. Follow the steps in the [Set up a grid row click](#set-up-a-grid-row-click) section to set up two interactive controls.
+1. Add a new parameter with these values:
+ - **Parameter name**: `ShowDetails`
+ - **Parameter type**: `Drop down`
+ - **Required**: `checked`
+ - **Get data from**: `JSON`
+ - **JSON Input**: `["Yes", "No"]`
+1. Select **Save** to commit changes.
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-edit-parameter.png" alt-text="Screenshot that shows editing an interactive parameter in workbooks.":::
+
+1. Set the parameter value to `Yes`.
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-set-parameter.png" alt-text="Screenshot that shows setting an interactive parameter value in a workbook.":::
+
+1. In the query control with the area chart, select **Advanced Settings** (the gear icon).
+1. If **ShowDetails** is set to `Yes`, select **Make this item conditionally visible**.
+1. Select **Done Editing** to commit the changes.
+1. On the workbook toolbar, select **Done Editing**.
+1. Switch the value of **ShowDetails** to `No`. Notice that the chart below disappears.
+
+The following image shows the case where **ShowDetails** is `Yes`:
+
+ :::image type="content" source="media/workbooks-configurations/workbooks-conditional-visibility-visible.png" alt-text="Screenshot that shows a workbook with a conditional component that's visible.":::
+
+The following image shows the hidden case where **ShowDetails** is `No`:
++
+## Set up multi-selects in grids and charts
+
+Query and metrics components can export parameters when a row or multiple rows are selected.
++
+1. In the query component that displays the grid, select **Advanced settings**.
+1. Select the **When items are selected, export parameters** checkbox.
+1. Select the **Allow selection of multiple values** checkbox.
+    - The displayed visualization allows multi-selecting, and the exported parameter's values are arrays of values, like when using multi-select dropdown parameters.
+    - If cleared, the displayed visualization only captures the last selected item and exports only a single value at a time.
+1. Use **Add Parameter** for each parameter you want to export. A pop-up window appears with the settings for the parameter to be exported.
+
+When you enable single selection, you can specify which field of the original data to export. Fields include parameter name, parameter type, and default value to use if nothing is selected.
+
+When you enable multi-selection, you specify which field of the original data to export. Fields include parameter name, parameter type, quote with, and delimiter. The quote with and delimiter values are used when turning array values into text as they're being substituted into a query. In multi-selection, if no values are selected, the default value is an empty array.
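+
+For example, here's a minimal sketch (with hypothetical machine names and a hypothetical `Computers` parameter) of how a multi-select value expands in a query when **Quote with** is `'` and **Delimiter** is `,`:
+
+```kusto
+// Selecting vm1 and vm2 exports the value: 'vm1','vm2'
+Perf
+| where Computer in ({Computers})   // after substitution: | where Computer in ('vm1','vm2')
+| take 5
+```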
+
+> [!NOTE]
+> For multi-selection, only unique values are exported. For example, you won't see output array values like "1,1,2,1". The array output will be "1,2".
+
+If you leave the **Field to export** setting empty in the export settings, all the available fields in the data are exported as a stringified JSON object of key:value pairs. For grids and tiles, the string includes the fields in the grid. For charts, the available fields are x, y, series, and label, depending on the type of chart.
+
+While the default behavior is to export a parameter as text, if you know the field is a subscription or resource ID, use that information as the export parameter type. Then the parameter can be used downstream in places that require those types of parameters.
+
+## Capture user input to use in a query
+
+You can capture user input by using dropdown lists and use the selections in your queries. For example, you can have a dropdown list to accept a set of virtual machines and then filter your KQL to include just the selected machines. In most cases, this step is as simple as including the parameter's value in the query:
+
+```sql
+ Perf
+ | where Computer in ({Computers})
+ | take 5
+```
+
+In more advanced scenarios, you might need to transform the parameter results before they can be used in queries. Take this OData filter payload:
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'Android' or OSFamily eq 'OS X') and (ComplianceState eq 'Compliant')"
+}
+```
+
+The following example shows how to enable this scenario. Let's say you want the values of the `OSFamily` and `ComplianceState` filters to come from dropdown lists in the workbook. The filter could include multiple values, as in the preceding `OSFamily` case. It also needs to support the case where you want to include all dimension values, that is, apply no filter for that dimension.
+
+### Set up parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-parameters).
+1. Select **Add parameter** to create a new parameter. Use the following settings:
+ - **Parameter name**: `OsFilter`
+ - **Display name**: `Operating system`
+ - **Parameter type**: `drop-down`
+ - **Allow multiple selections**: `Checked`
+ - **Delimiter**: `or` (with spaces before and after)
+ - **Quote with**: `<empty>`
+ - **Get data from**: `JSON`
+ - **JSON Input**:
+
+ ```json
+ [
+ { "value": "OSFamily eq 'Android'", "label": "Android" },
+ { "value": "OSFamily eq 'OS X'", "label": "OS X" }
+ ]
+ ```
+
+ - In the **Include in the drop down** section:
+ - Select the **All** checkbox.
+ - **Select All value**: `OSFamily ne '#@?'`
+ - Select **Save** to save this parameter.
+1. Add another parameter with these settings:
+ - **Parameter name**: `ComplianceStateFilter`
+ - **Display name**: `Compliance State`
+ - **Parameter type**: `drop-down`
+ - **Allow multiple selections**: `Checked`
+ - **Delimiter**: `or` (with spaces before and after)
+ - **Quote with**: `<empty>`
+ - **Get data from**: `JSON`
+ - **JSON Input**:
+
+ ```json
+ [
+ { "value": "ComplianceState eq 'Compliant'", "label": "Compliant" },
+ { "value": "ComplianceState eq 'Non-compliant'", "label": "Non compliant" }
+ ]
+ ```
+ - In the **Include in the drop down** section:
+ - Select the **All** checkbox.
+ - **Select All value**: `ComplianceState ne '#@?'`
+ - Select **Save** to save this parameter.
+
+1. Select **Add text** to add a text block. In the **Markdown text to display** block, add:
+
+ ```json
+ {
+ "name": "deviceComplianceTrend",
+ "filter": "({OsFilter}) and ({ComplianceStateFilter})"
+ }
+ ```
+
+ This screenshot shows the parameter settings:
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-odata-parameters-settings.png" alt-text="Screenshot that shows parameter settings for dropdown lists with parameter values.":::
+
+### Single filter value
+
+The simplest case is the selection of a single filter value in each of the dimensions. The dropdown control uses the JSON input field's value as the parameter's value.
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X') and (ComplianceState eq 'Compliant')"
+}
+```
++
+### Multiple filter values
+
+If you choose multiple filter values, for example, both Android and OS X operating systems, the `Delimiter` and `Quote with` parameter settings kick in and produce this compound filter:
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X' or OSFamily eq 'Android') and (ComplianceState eq 'Compliant')"
+}
+```
++
+### No filter case
+
+Another common case is having no filter for that dimension. This scenario is equivalent to including all values of the dimension as part of the result set. The way to enable it is to add an `All` option to the dropdown and have it return a filter expression that always evaluates to `true`, such as _ComplianceState ne '#@?'_.
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X' or OSFamily eq 'Android') and (ComplianceState ne '#@?')"
+}
+```
++
+## Reuse query data in different visualizations
+
+There are times when you want to visualize the underlying dataset in different ways without paying the cost of the query each time. This sample shows you how to do so by using the `Merge` option in the query control.
+
+### Set up the parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. Select **Add query** to create a query control, and enter these values:
+ - **Data source**: `Logs`
+ - **Resource type**: `Log Analytics`
+ - **Log Analytics workspace**: _Pick one of your workspaces that has performance data_
+ - **Log Analytics workspace logs query**:
+
+ ```sql
+ Perf
+ | where CounterName == '% Processor Time'
+ | summarize CpuAverage = avg(CounterValue), CpuP95 = percentile(CounterValue, 95) by Computer
+ | order by CpuAverage desc
+ ```
+
+1. Select **Run Query** to see the results.
+
+ This result dataset is the one we want to reuse in multiple visualizations.
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-reuse-data-resultset.png" alt-text="Screenshot that shows the result of a workbooks query." lightbox="media/workbooks-commonly-used-components/workbooks-reuse-data-resultset.png":::
+
+1. Go to the **Advanced settings** tab, and for the name, enter `Cpu data`.
+1. Select **Add query** to create another query control.
+1. For **Data source**, select `Merge`.
+1. Select **Add Merge**.
+1. In the settings pane, set:
+ - **Merge Type**: `Duplicate table`
+ - **Table**: `Cpu data`
+1. Select **Run Merge**. You get the same result as the preceding query.
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-reuse-data-duplicate.png" alt-text=" Screenshot that shows duplicate query results in a workbook." lightbox="media/workbooks-commonly-used-components/workbooks-reuse-data-duplicate.png":::
+
+1. Set the table options:
+ - Use the **Name After Merge** column to set friendly names for your result columns. For example, you can rename `CpuAverage` to `CPU utilization (avg)`, and then use **Run Merge** to update the result set.
+ - Use **Delete** to remove a column.
+ - Select the `[Cpu data].CpuP95` row.
+ - Use **Delete** in the query control toolbar.
+ - Use **Run Merge** to see the result set without the CpuP95 column
+1. Change the order of the columns by selecting **Move up** or **Move down**.
+1. Add new columns based on values of other columns by selecting **Add new item**.
+1. Style the table by using the options in **Column settings** to get the visualization you want.
+1. Add more query controls working against the `Cpu data` result set if needed.
+
+This example shows Average and P95 CPU utilization side by side:
++
+## Use Azure Resource Manager to retrieve alerts in a subscription
+
+This sample shows you how to use the Azure Resource Manager query control to list all existing alerts in a subscription. It also uses JSON Path transformations to format the results. See the [list of supported Resource Manager calls](/rest/api/azure/).
+
+### Set the parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. Select **Add parameter**, and set:
+ - **Parameter name**: `Subscription`
+ - **Parameter type**: `Subscription picker`
+ - **Required**: `Checked`
+ - **Get data from**: `Default Subscriptions`
+1. Select **Save**.
+1. Select **Add query** to create a query control, and use these settings. For this example, we're using the [Alerts Get All REST call](/rest/api/monitor/alertsmanagement/alerts/getall) to get a list of existing alerts for a subscription. For supported api-versions, see the [Azure REST API reference](/rest/api/azure/).
+ - **Data source**: `Azure Resource Manager (Preview)`
+ - **Http Method**: `GET`
+ - **Path**: `/subscriptions/{Subscription:id}/providers/Microsoft.AlertsManagement/alerts`
+ - Add the api-version parameter on the **Parameters** tab and set:
+ - **Parameter**: `api-version`
+ - **Value**: `2018-05-05`
+
+1. Select a subscription from the created subscription parameter, and select **Run Query** to see the results.
+
+    This raw JSON is returned from Resource Manager:
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-no-formatting.png" alt-text="Screenshot that shows an alert data JSON response in workbooks by using a Resource Manager provider." lightbox="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-no-formatting.png":::
+
+### Format the response
+
+You might be satisfied with the information here. But let's extract some interesting properties and format the response in a way that's easy to read.
+
+1. Go to the **Result Settings** tab.
+1. Switch **Result Format** from `Content` to `JSON Path`. [JSON Path](workbooks-jsonpath.md) is a workbook transformer.
+1. In the JSON Path settings, set **JSON Path Table** to `$.value.[*].properties.essentials`. This extracts all `"value.*.properties.essentials"` fields from the returned JSON.
+1. Select **Run Query** to see the grid.
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-grid.png" alt-text="Screenshot that shows alert data in a workbook in grid format by using a Resource Manager provider." lightbox="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-grid.png":::
+
+### Filter the results
+
+JSON Path also allows you to choose information from the generated table to show as columns.
+
+For example, if you want to filter the results to the columns **TargetResource**, **Severity**, **AlertState**, **AlertRule**, **Description**, **StartTime**, and **ResolvedTime**, you could add the following rows in the columns table in JSON Path:
+
+| Column ID | Column JSON Path |
+| :- | :-: |
+| TargetResource | $.targetResource |
+| Severity | $.severity |
+| AlertState | $.alertState |
+| AlertRule | $.alertRule |
+| Description | $.description |
+| StartTime | $.startDateTime |
+| ResolvedTime | $.monitorConditionResolvedDateTime |
++
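+
+For orientation, here's a hypothetical, trimmed example of the response shape these settings assume: `$.value.[*].properties.essentials` selects each alert's `essentials` object, and the column paths above then read individual fields from it.
+
+```json
+{
+  "value": [
+    {
+      "properties": {
+        "essentials": {
+          "severity": "Sev3",
+          "alertState": "New",
+          "alertRule": "High CPU usage",
+          "description": "CPU usage exceeded the threshold",
+          "targetResource": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm",
+          "startDateTime": "2024-01-08T10:00:00Z",
+          "monitorConditionResolvedDateTime": null
+        }
+      }
+    }
+  ]
+}
+```
+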
+## Next steps
+
+* [Learn about the types of visualizations you can use to create rich visual reports with Azure Workbooks](workbooks-visualizations.md).
+* [Use drop down parameters to simplify complex reporting](workbooks-dropdowns.md).
azure-monitor Workbooks Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-limits.md
Previously updated : 06/21/2023 Last updated : 01/08/2024
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
Title: Azure Workbooks link actions
description: This article explains how to use link actions in Azure Workbooks. Previously updated : 06/21/2023 Last updated : 12/13/2023
Link actions can be accessed through workbook link components or through column
## General link actions
-| Link action | Action on click |
+| Link action | Action on select |
|:- |:-| |Generic Details| Shows the row values in a property grid context view. | |Cell Details| Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information, for example, JSON with request properties like location and role instance. |
Link actions can be accessed through workbook link components or through column
## Application Insights
-| Link action | Action on click |
+| Link action | Action on select |
|:- |:-| |Custom Event Details| Opens the Application Insights search details with the custom event ID ("itemId") in the cell. | |Details| Similar to Custom Event Details except for dependencies, exceptions, page views, requests, and traces. |
Link actions can be accessed through workbook link components or through column
## Azure resource
-| Link action | Action on click |
+| Link action | Action on select |
|:- |:-| |ARM Deployment| Deploys an Azure Resource Manager (ARM) template. When this item is selected, more fields are displayed to let you configure which ARM template to open and parameters for the template. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). | |Create Alert Rule| Creates an alert rule for a resource. | |Custom View| Opens a custom view. When this item is selected, more fields appear where you can configure the view extension, view name, and any parameters used to open the view. [See custom view link settings](#custom-view-link-settings). | |Metrics| Opens a metrics view. |
-|Resource Overview| Opens the resource's view in the portal based on the resource ID value in the cell. You can also optionally set a submenu value that will open a specific menu item in the resource view. |
+|Resource Overview| Opens the resource's view in the portal based on the resource ID value in the cell. You can also optionally set a submenu value that opens a specific menu item in the resource view. |
|Workbook (Template)| Opens a workbook template. When this item is selected, more fields appear where you can configure what template to open. | ## Link settings
This section defines where the template should come from and the parameters used
| Source | Description | |:- |:-|
-|Resource group id comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value isn't specified, the deployment will fail. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources).|
-|ARM template URI from| The URI to the ARM template itself. The template URI needs to be accessible to the users who will deploy the template. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources). For more information, see [Azure quickstart templates](https://azure.microsoft.com/resources/templates/).|
-|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer pane limitation** doesn't render reference parameters correctly and will show up as null/value. As a result, users won't be able to correctly deploy reference parameters from the **Template Viewer** tab.|
+|Resource group ID comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value isn't specified, the deployment fails. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources).|
+|ARM template URI from| The URI to the ARM template itself. The template URI needs to be accessible to the users who deploy the template. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources). For more information, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/).|
+|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI is set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References point to items, such as key vault secrets, that the user has access to. <br/><br/> **Template Viewer pane limitation**: The **Template Viewer** pane doesn't render reference parameters correctly; they show up as null values. As a result, users can't correctly deploy reference parameters from the **Template Viewer** tab. An example of the parameter output format follows this table.|
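+
+The exact preview output depends on your template, but it generally follows the standard ARM deployment parameter format. A minimal, hypothetical sketch with one value parameter and one reference parameter that points to a key vault secret (the names and IDs are placeholders):
+
+```json
+{
+  "vmName": {
+    "value": "<name of the resource to deploy>"
+  },
+  "adminPassword": {
+    "reference": {
+      "keyVault": {
+        "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
+      },
+      "secretName": "<secret-name>"
+    }
+  }
+}
+```
+
+The reference parameter is the kind of entry that the **Template Viewer** tab can't render or deploy correctly, as noted above.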
<!-- convertborder later --> :::image type="content" source="./media/workbooks-link-actions/template-settings.png" lightbox="./media/workbooks-link-actions/template-settings.png" alt-text="Screenshot that shows the Template Settings tab." border="false"::: ### UX settings
-This section configures what you'll see before you run the Resource Manager deployment.
+This section configures what you see before you run the Resource Manager deployment.
| Source | Description | |:- |:-| |Title from| Title used on the run view. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources).| |Description from| The Markdown text used to provide a helpful description to users when they want to deploy the template. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources). <br/><br/> If you select **Static Value**, a multi-line text box appears. In this text box, you can resolve parameters by using `"{paramName}"`. Also, you can treat columns as parameters by appending `"_column"` after the column name like `{columnName_column}`. In the following example image, you can reference the column `"VMName"` by writing `"{VMName_column}"`. The value after the colon is the [parameter formatter](../visualize/workbooks-parameters.md#parameter-formatting-options). In this case, it's **value**.|
-|Run button text from| Label used on the run (execute) button to deploy the ARM template. Users will select this button to start deploying the ARM template.|
+|Run button text from| Label used on the run (execute) button to deploy the ARM template. Users select this button to start deploying the ARM template.|
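+
+As described in the **Description from** row, a **Static Value** description can mix Markdown with parameter and column substitutions. A short, hypothetical example that assumes a `VMName` column and a `ResourceGroup` parameter exist in the workbook:
+
+```
+## Restart {VMName_column:value}
+Applies to resource group {ResourceGroup}.
+```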
<!-- convertborder later --> :::image type="content" source="./media/workbooks-link-actions/ux-settings.png" lightbox="./media/workbooks-link-actions/ux-settings.png" alt-text="Screenshot that shows the Resource Manager UX Settings tab." border="false":::
After these configurations are set, when you select the link, the view opens wit
## Custom view link settings
-Use this setting to open **Custom Views** in the Azure portal. Verify the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings: via the form or URL.
+Use this setting to open **Custom Views** in the Azure portal. Verify the configuration and settings. Incorrect values cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings: via the form or URL.
> [!NOTE] > Views with a menu can't be opened in a context tab. If a view with a menu is configured to open in a context tab, no context tab is shown when the link is selected.
If the selected link type is **Workbook (Template)**, you must specify more sett
| Setting | Description | |:- |:-|
-|Workbook owner Resource Id comes from| This value is the Resource ID of the Azure resource that "owns" the workbook. Commonly, it's an Application Insights resource or a Log Analytics workspace. Inside of Azure Monitor, this value might also be the literal string `"Azure Monitor"`. When the workbook is saved, this value is what the workbook is linked to. |
+|Workbook owner Resource ID comes from| This value is the Resource ID of the Azure resource that "owns" the workbook. Commonly, it's an Application Insights resource or a Log Analytics workspace. Inside of Azure Monitor, this value might also be the literal string `"Azure Monitor"`. When the workbook is saved, this value is what the workbook is linked to. |
|Workbook resources come from| An array of Azure Resource IDs that specify the default resource used in the workbook. For example, if the template being opened shows virtual machine metrics, the values here would be virtual machine resource IDs. Many times, the owner and resources are set to the same settings. |
-|Template Id comes from| Specify the ID of the template to be opened. A community template from the gallery is the most common case. Prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If it's a link to a saved workbook or template, use the full Azure resource ID of that item. |
+|Template ID comes from| Specify the ID of the template to be opened. A community template from the gallery is the most common case. Prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If it's a link to a saved workbook or template, use the full Azure resource ID of that item, for example, `/subscriptions/12345678-a1b2-1234-a1b2-c3d4e5f6/resourceGroups/rgname/providers/microsoft.insights/workbooks/1a2b3c4d-5678-abcd-xyza-1a2b3c4d5e6f`. |
|Workbook Type comes from| Specify the kind of workbook template to open. The most common cases use the default or workbook option to use the value in the current workbook. | |Gallery Type comes from| This value specifies the gallery type that's displayed in the **Gallery** view of the template that opens. The most common cases use the default or workbook option to use the value in the current workbook. |
-|Location comes from| The location field should be specified if you're opening a specific workbook resource. If location isn't specified, finding the workbook content is much slower. If you know the location, specify it. If you don't know the location or are opening a template with no specific location, leave this field as `Default`.|
+|Location comes from| The location field should be specified if you're opening a specific workbook resource. If location isn't specified, finding the workbook content is slower. If you know the location, specify it. If you don't know the location or are opening a template with no specific location, leave this field as `Default`.|
|Pass specific parameters to template| Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template or else all the parameters in the current workbook are passed to the template. In that case, the parameter *names* must be the same in both workbooks for this parameter value to work.| |Workbook Template Parameters| This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select from **Cell**, **Column**, **Parameter**, and **Static Value**. The name and value must not be empty to pass that parameter to the target template.|
-For each of the preceding settings, you must choose where the value in the linked workbook will come from. See [Link sources](#link-sources).
+For each of the preceding settings, you must choose where the value in the linked workbook comes from. See [Link sources](#link-sources).
When the workbook link is opened, the new workbook view is passed to all the values configured from the preceding settings. <!-- convertborder later -->
When the workbook link is opened, the new workbook view is passed to all the val
|Cell| Use the value in that cell in the grid as the link value. | |Column| When selected, a field appears where you can select another column in the grid. The value of that column for the row is used in the link value. This link value is commonly used to enable each row of a grid to open a different template by setting the **Template Id** field to **column**. Or it's used to open the same workbook template for different resources, if the **Workbook resources** field is set to a column that contains an Azure Resource ID. | |Parameter| When selected, a field appears where you can select a parameter. The value of that parameter is used for the value when the link is selected. |
-|Static Value| When selected, a field appears where you can enter a static value that's used in the linked workbook. This value is commonly used when all the rows in the grid will use the same value for a field. |
+|Static Value| When selected, a field appears where you can enter a static value that's used in the linked workbook. This value is commonly used when all the rows in the grid use the same value for a field. |
|Component| Use the value set in the current component of the workbook. It's common in query and metrics components to set the workbook resources in the linked workbook to those resources used in the query/metrics component, not the current workbook. | |Workbook| Use the value set in the current workbook. | |Default| Use the default value that would be used if no value were specified. This situation is common for **Gallery Type comes from**, where the default gallery would be set by the type of the owner resource. |
azure-monitor Workbooks Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-manage.md
+
+ Title: Manage Azure Monitor workbooks
+description: Understand how to manage Azure Workbooks
++ Last updated : 01/08/2024++
+# Manage Azure Monitor Workbooks
+This article describes how to manage Azure Workbooks in the Azure portal.
+
+## Save a workbook
+
+Workbooks are shared resources. They require write access to the parent resource group to be saved.
+
+1. In the Azure portal, select the workbook.
+1. Select **Save**.
+1. Enter the **title**, **subscription**, **resource group**, and **location**.
+1. Select **Save**.
+
+By default, the workbook is prefilled with the same subscription and resource group settings as the Log Analytics workspace.
+By default, workbooks are saved to **My Reports** and are accessible only to the individual user. You can also save the workbook directly to shared reports or share the workbook.
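+
+Behind the scenes, a saved shared workbook is an Azure resource of type `microsoft.insights/workbooks`, which is why write access to the target resource group is required. The following is only a rough sketch of how such a resource might be declared in an ARM template; the API version and property values shown here are assumptions, and the workbook content is stored as an escaped JSON string in `serializedData`:
+
+```json
+{
+  "type": "microsoft.insights/workbooks",
+  "apiVersion": "2020-10-20",
+  "name": "<a GUID that identifies the workbook resource>",
+  "location": "[resourceGroup().location]",
+  "kind": "shared",
+  "properties": {
+    "displayName": "My saved workbook",
+    "category": "workbook",
+    "sourceId": "<resource ID of the owner resource, such as a Log Analytics workspace>",
+    "serializedData": "<workbook content serialized as an escaped JSON string>"
+  }
+}
+```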
+
+## Share a workbook
+
+When you want to share a workbook or template, keep in mind that the person you want to share with must have permissions to access the workbook. They must have an Azure account and **Monitoring Reader** permissions.
+To share a workbook or workbook template:
+
+1. In the Azure portal, select **Monitor**, and then select **Workbooks** from the left pane.
+1. Select the checkbox next to the workbook or template you want to share.
+1. Select the **Share** icon from the top toolbar.
+1. The **Share workbook** or **Share template** window opens with a URL to use for sharing the workbook.
+1. Copy the link to share the workbook, or select **Share link via email** to open your default mail app.
++
+## Delete a workbook
+
+1. In the Azure portal, select **Monitor**, and then select **Workbooks** from the left pane.
+1. Select the checkbox next to the workbook you want to delete.
+1. Select **Delete** from the top toolbar.
+
+## Recover a deleted workbook
+When you delete an Azure workbook, it's soft deleted and can be recovered by contacting support. After the soft-delete period, the workbook and its content are nonrecoverable and are queued to be purged completely within 30 days.
+
+> [!NOTE]
+> Workbooks saved by using bring-your-own storage can't be recovered by support. You might be able to recover the workbook content from the storage account if soft delete is enabled on that storage account.
+
+## Set up Auto refresh
+
+1. In the Azure portal, select the workbook.
+1. Select **Auto refresh**, and then select an interval from the list. The workbook starts refreshing after the selected time interval.
+
+- Auto refresh only applies when the workbook is in read mode. If a user sets an interval of 5 minutes and after 4 minutes switches to edit mode, refreshing doesn't occur if the user is still in edit mode. But if the user returns to read mode, the interval of 5 minutes resets and the workbook is refreshed after 5 minutes.
+- Selecting **Auto refresh** in read mode also resets the interval. If a user sets the interval to 5 minutes and after 3 minutes selects **Auto refresh** to manually refresh the workbook, the **Auto refresh** interval resets and the workbook is auto-refreshed after 5 minutes.
+- The **Auto refresh** setting isn't saved with the workbook. Every time a user opens a workbook, **Auto refresh** is **Off** and needs to be set again.
+- Switching workbooks and going out of the gallery clears the **Auto refresh** interval.
+++
+## Manage workbook resources
+
+In the **Resources** section of the **Settings** tab, you can manage the resources in your workbook.
+
+- The workbook is saved in the resource marked as the **Owner**. When you browse workbooks, this resource is where you find the workbooks and templates. Select **Browse across galleries** to see the workbooks for all your resources.
+- The owner resource can't be removed.
+- Select **Add Resources** to add a default resource.
+- To remove resources, select one or more resources, and then select **Remove Selected Resources**.
+- When you're finished adding and removing resources, select **Apply Changes**.
+
+## Manage workbook versions
++
+The versions tab contains a list of all the available versions of this workbook. Select a version and use the toolbar to compare, view, or restore versions. Previous workbook versions are available for 90 days.
+- **Compare**: Compares the JSON of the previous workbook to the most recently saved version.
+- **View**: Opens the selected version of the workbook in a context pane.
+- **Restore**: Saves a new copy of the workbook with the contents of the selected version and overwrites any existing current content. You're prompted to confirm this action.
+
+### Compare versions
++
+> [!NOTE]
+> Version history isn't available for [bring-your-own-storage](workbooks-bring-your-own-storage.md) workbooks.
+
+## Manage workbook styles
+On this tab, you can set a padding and spacing style for the whole workbook. The possible options are **Wide**, **Standard**, **Narrow**, and **None**. The default style setting is **Standard**.
+
+## Pinning workbooks
+
+You can pin text, query, or metrics components in a workbook by using the **Pin** button on those items while the workbook is in pin mode. You can also use the **Pin** button if the workbook author enabled the setting that makes it visible for that element.
+
+While in pin mode, you can select **Pin Workbook** to pin a component from this workbook to a dashboard. Select **Link to Workbook** to pin a static link to this workbook on your dashboard. You can choose a specific component in your workbook to pin.
+
+To access pin mode, select **Edit** to enter editing mode. Select **Pin** on the top bar. An individual **Pin** then appears above each corresponding workbook part's **Edit** button on the right side of the screen.
++
+> [!NOTE]
+> The state of the workbook is saved at the time of the pin. Pinned workbooks on a dashboard don't update if the underlying workbook is modified. To update a pinned workbook part, you must delete and re-pin that part.
+
+### Time ranges for pinned queries
+
+Pinned workbook query parts respect the dashboard's time range if the pinned item is configured to use a *TimeRange* parameter. The dashboard's time range value is used as the time range parameter's value. Any change of the dashboard time range causes the pinned item to update. If a pinned part is using the dashboard's time range, the subtitle of the pinned part updates to show the dashboard's time range whenever the time range changes.
+
+Pinned workbook parts using a time range parameter auto-refresh at a rate determined by the dashboard's time range. The last time the query ran appears in the subtitle of the pinned part.
+
+If a pinned component has an explicitly set time range and doesn't use a time range parameter, that time range is always used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part doesn't show the dashboard's time range. The query doesn't auto-refresh on the dashboard. The subtitle shows the last time the query executed.
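+
+For example, a pinned query part picks up the dashboard's time range only when its query filters on the time range parameter. A minimal sketch, assuming an Application Insights `requests` table and a time range parameter named `TimeRange`:
+
+```
+requests
+| where timestamp between ({TimeRange:start} .. {TimeRange:end})
+| summarize requestCount = count() by bin(timestamp, {TimeRange:grain})
+```
+
+A query that instead hard-codes a filter such as `ago(1d)` keeps its own time range on the dashboard and doesn't auto-refresh with it.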
+
+> [!NOTE]
+> Queries that use the *merge* data source aren't currently supported when pinning to dashboards.
+
+### Enable Trusted hosts
+
+Enable a trusted source or mark this workbook as trusted in this browser.
+
+| Control | Definition |
+| -- | -- |
+| Mark workbook as trusted | If enabled, this workbook can call any endpoint, whether the host is marked as trusted or not. A workbook is trusted if it's a new workbook, an existing workbook that's saved, or is explicitly marked as a trusted workbook. |
+| URL grid | A grid to explicitly add trusted hosts. |
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md
Workbooks are helpful for scenarios such as:
Watch this video to see how you can use Azure Workbooks to get insights and visualize your data. > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5a1su]
+## Accessing Azure Workbooks
+
+You can get to Azure Workbooks in a few ways:
+
+- In the [Azure portal](https://portal.azure.com), select **Monitor** > **Workbooks** from the menu on the left.
+
+ :::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot that shows Workbooks in the menu.":::
+
+- In a **Log Analytics workspaces** page, select **Workbooks** at the top of the page.
+
+ :::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks on the Log Analytics workspaces page.":::
+
+When the gallery opens, select a saved workbook or a template. You can also search for a name in the search box.
+ ## The gallery The gallery lists all the saved workbooks and templates in your current environment. Select **Browse across galleries** to see the workbooks for all your resources.
Standard Azure roles that provide access to workbooks:
For custom roles, you must add `microsoft.insights/workbooks/write` to the user's permissions to edit and save a workbook. For more information, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role. + ## Next steps
-[Get started with Azure Workbooks](workbooks-getting-started.md)
+[Create an Azure Workbook](workbooks-create-workbook.md)
azure-monitor Workbooks Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-resources.md
This approach can be used to bind resources to other controls like metrics.
## Next steps
-[Getting started with Azure Workbooks](workbooks-getting-started.md)
+[Getting started with Azure Workbooks](workbooks-overview.md)
azure-monitor Workbooks Retrieve Legacy Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks.md
- Title: Retrieve legacy and private workbooks
-description: Learn how to retrieve deprecated legacy and private Azure workbooks.
--- Previously updated : 06/21/2023----
-# Retrieve legacy Application Insights workbooks
-
-Private and legacy workbooks have been deprecated and aren't accessible from the Azure portal. If you're looking for the deprecated workbook that you forgot to convert before the deadline, you can use this process to retrieve the content of your old workbook and load it into a new workbook. This tool will only be available for a limited time.
-
-Application Insights Workbooks, also known as "Legacy Workbooks", are stored as a different Azure resource type than all other Azure Workbooks. These different Azure resource types are now being merged one single standard type so that you can take advantage of all the existing and new functionality available in standard Azure Workbooks. For example:
-
-* Converted legacy workbooks can be queried via Azure Resource Graph (ARG), and show up in other standard Azure views of resources in a resource group or subscription.
-* Converted legacy workbooks can support top level ARM template features like other resource types, including, but not limited to:
- * Tags
- * Policies
- * Activity Log / Change Tracking
- * Resource locks
-* Converted legacy workbooks can support [ARM templates](workbooks-automate.md).
-* Converted legacy workbooks can support the [BYOS](workbooks-bring-your-own-storage.md) feature.
-* Converted legacy workbooks can be saved in region of your choice.
-
-The legacy workbook deprecation doesn't change where you find your workbooks in the Azure portal. The legacy workbooks are still visible in the Workbooks section of Application Insights. The deprecation won't affect the content of your workbook.
-
-> [!NOTE]
->
-> - After April 15 2021, you will not be able to save legacy workbooks.
-> - Use `Save as` on a legacy workbook to create a standard Azure workbook.
-> - Any new workbook you create will be a standard workbook.
-
-## Why isn't there an automatic conversion?
-- The write permissions for legacy workbooks are only based on Azure role based access control on the Application Insights resource itself. A user may not be allowed to create new workbooks in that resource group. If the workbooks were auto migrated, they could fail to be moved, or they could be created but then a user might not be able to delete them after the fact.-- Legacy workbooks support "My" (private) workbooks, which is no longer supported by Azure Workbooks. A migration would cause those private workbooks to become publicly visible to users with read access to that same resource group.-- Usage of links/group content loaded from saved Legacy workbooks would become broken. Authors will need to manually update these links to point to the new saved items.-
-For these reasons, we suggest that users manually migrate the workbooks they want to keep.
-## Convert a legacy Application Insights workbook
-1. Identify legacy workbooks. In the gallery view, legacy workbooks have a warning icon. When you open a legacy workbook, there's a banner.
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-legacy-warning.png" alt-text="Screenshot of the warning symbol on a deprecated workbook.":::
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-legacy-banner.png" alt-text="Screenshot of the banner at the top of a deprecated workbook.":::
-
-1. Convert the legacy workbooks. For any legacy workbook you want to keep after June 30 2021:
-
- 1. Open the workbook, and then from the toolbar, select **Edit**, then **Save As**.
- 1. Enter the workbook name.
- 1. Select a subscription, resource group, and region where you have write access.
- 1. If the Legacy Workbook uses links to other Legacy Workbooks, or loading workbook content in groups, those items will need to be updated to point to the newly saved workbook.
- 1. After you have saved the workbook, you can delete the legacy Workbook, or update its contents to be a link to the newly saved workbook.
-
-1. Verify permissions. For legacy workbooks, permissions were based on the Application Insights specific roles, like Application Insights Contributor. Verify that users of the new workbook have the appropriate standard Monitoring Reader/Contributor or Workbook Reader/Contributor roles so that they can see and create Workbooks in the appropriate resource groups.
-
-For more information, see [access control](workbooks-overview.md#access-control).
-
-After deprecation of the legacy workbooks, you'll still be able to retrieve the content of Legacy Workbooks for a limited time by using Azure CLI or PowerShell tools, to query `microsoft.insights/components/[name]/favorites` for the specific resource using `api-version=2015-05-01`.
-## Convert a private workbook
-
-1. Open a new or empty workbook.
-1. In the toolbar, select **Edit** and then navigate to the advanced editor.
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-advanced-editor.png" alt-text="Screenshot of the advanced editor used to retrieve deprecated workbooks.":::
-
-1. Copy the [workbook json](#json-for-private-workbook-conversion) and paste it into your open advanced editor.
-1. Select **Apply** at the top right.
-1. Select the subscription and resource group and category of the workbook you'd like to retrieve.
-1. The grid at the bottom of this workbook lists all the private workbooks in the selected subscription or resource group.
-1. Select one of the workbooks in the grid. Your workbook should look something like this:
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-private.png" alt-text="Screenshot of a deprecated private workbook converted to a standard workbook." lightbox="media//workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-private.png":::
-
-1. Select **Open Content as Workbook** at the bottom of the workbook.
-1. A new workbook appears with the content of the old private workbook that you selected. Save the workbook as a standard workbook.
-1. You have to re-create links to the deprecated workbook or its contents, including dashboard pins and URL links.
-## Convert a favorites-based (legacy) workbook
-
-1. Navigate to your Application Insights Resource > Workbooks gallery.
-1. Open a new or empty workbook.
-1. Select Edit in the toolbar and navigate to the advanced editor.
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-advanced-editor.png" alt-text="Screenshot of the advanced editor used to retrieve deprecated workbooks.":::
-
-1. Copy the [workbook json](#json-for-private-workbook-conversion) and paste it into your open advanced editor.
-1. Select **Apply**.
-1. The grid at the bottom of this workbook lists all the legacy workbooks within the current AppInsights resource.
-1. Select one of the workbooks in the grid. Your workbook should now look something like this:
-
- :::image type="content" source="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-legacy.png" alt-text="Screenshot of a deprecated legacy workbook converted to a standard workbook." lightbox="media/workbooks-retrieve-legacy-workbooks/workbooks-retrieve-deprecated-legacy.png":::
-
-1. Select **Open Content as Workbook** at the bottom of the workbook.
-1. A new workbook appears with the content of the old private workbook that you selected. Save the workbook as a standard workbook.
-1. You have to re-create links to the deprecated workbook or its contents, including dashboard pins and URL links.
-
-## JSON for legacy workbook conversion
-
-```json
-{
- "version": "Notebook/1.0",
- "items": [
- {
- "type": 9,
- "content": {
- "version": "KqlParameterItem/1.0",
- "parameters": [
- {
- "id": "876235fc-ef67-418d-87f5-69f496be171b",
- "version": "KqlParameterItem/1.0",
- "name": "resource",
- "type": 5,
- "typeSettings": {
- "additionalResourceOptions": [
- "value::1"
- ],
- "componentIdOnly": true
- },
- "timeContext": {
- "durationMs": 86400000
- },
- "defaultValue": "value::1"
- }
- ],
- "style": "pills",
- "queryType": 0,
- "resourceType": "microsoft.insights/components"
- },
- "conditionalVisibility": {
- "parameterName": "debug",
- "comparison": "isNotEqualTo"
- },
- "name": "resource selection"
- },
- {
- "type": 1,
- "content": {
- "json": "# Legacy (Favorites based) Workbook Conversion\r\n\r\nThis workbook shows favorite based (legacy) workbooks in this Application Insights resource: \r\n\r\n{resource:grid}\r\n\r\nThe grid below will show the favorite workbooks found, and allows you to copy the contents, or open them as a full Azure Workbook where they can be saved."
- },
- "name": "text - 5"
- },
- {
- "type": 3,
- "content": {
- "version": "KqlItem/1.0",
- "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GETARRAY\",\"path\":\"{resource}/favorites\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2015-05-01\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"false\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$.Name\",\"columnid\":\"name\"},{\"path\":\"$.FavoriteId\",\"columnid\":\"id\"},{\"path\":\"$.TimeModified\",\"columnid\":\"modified\",\"columnType\":\"datetime\"},{\"path\":\"$.FavoriteType\",\"columnid\":\"type\"}]}}]}",
- "size": 0,
- "title": "Legacy Workbooks (Select an item to see contents)",
- "noDataMessage": "No legacy workbooks found",
- "noDataMessageStyle": 3,
- "exportedParameters": [
- {
- "fieldName": "id",
- "parameterName": "favoriteId"
- },
- {
- "fieldName": "name",
- "parameterName": "name",
- "parameterType": 1
- }
- ],
- "queryType": 12,
- "gridSettings": {
- "rowLimit": 1000,
- "filter": true
- }
- },
- "name": "list favorites"
- },
- {
- "type": 9,
- "content": {
- "version": "KqlParameterItem/1.0",
- "parameters": [
- {
- "id": "8d78556d-a4f3-4868-bf06-9e0980246d31",
- "version": "KqlParameterItem/1.0",
- "name": "config",
- "type": 1,
- "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GET\",\"path\":\"{resource}/favorites/{favoriteId}\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2015-05-01\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"true\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$.Config\",\"columnid\":\"Content\"}]}}]}",
- "timeContext": {
- "durationMs": 86400000
- },
- "queryType": 12
- }
- ],
- "style": "pills",
- "queryType": 12
- },
- "conditionalVisibility": {
- "parameterName": "debug",
- "comparison": "isNotEqualTo"
- },
- "name": "turn response into param"
- },
- {
- "type": 11,
- "content": {
- "version": "LinkItem/1.0",
- "style": "list",
- "links": [
- {
- "id": "fc93ee9e-d5b2-41de-b74a-1fb62f0df49e",
- "linkTarget": "OpenBlade",
- "linkLabel": "Open Content as Workbook",
- "style": "primary",
- "bladeOpenContext": {
- "bladeName": "UsageNotebookBlade",
- "extensionName": "AppInsightsExtension",
- "bladeParameters": [
- {
- "name": "ComponentId",
- "source": "parameter",
- "value": "resource"
- },
- {
- "name": "NewNotebookData",
- "source": "parameter",
- "value": "config"
- }
- ]
- }
- }
- ]
- },
- "conditionalVisibility": {
- "parameterName": "config",
- "comparison": "isNotEqualTo"
- },
- "name": "links - 4"
- }
- ],
- "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
-}
-```
--
-## JSON for private workbook conversion
-
-```json
-{
- "version": "Notebook/1.0",
- "items": [
- {
- "type": 9,
- "content": {
- "version": "KqlParameterItem/1.0",
- "crossComponentResources": [
- "{Subscription}"
- ],
- "parameters": [
- {
- "id": "1f74ed9a-e3ed-498d-bd5b-f68f3836a117",
- "version": "KqlParameterItem/1.0",
- "name": "Subscription",
- "type": 6,
- "isRequired": true,
- "typeSettings": {
- "additionalResourceOptions": [
- "value::1"
- ],
- "includeAll": false,
- "showDefault": false
- }
- },
- {
- "id": "b616a3a3-4271-4208-b1a9-a92a78efed08",
- "version": "KqlParameterItem/1.0",
- "name": "ResourceGroup",
- "label": "Resource group",
- "type": 2,
- "isRequired": true,
- "query": "Resources\r\n| summarize by resourceGroup\r\n| order by resourceGroup asc\r\n| project id=resourceGroup, resourceGroup",
- "crossComponentResources": [
- "{Subscription}"
- ],
- "typeSettings": {
- "additionalResourceOptions": [
- "value::1"
- ],
- "showDefault": false
- },
- "queryType": 1,
- "resourceType": "microsoft.resourcegraph/resources"
- },
- {
- "id": "3872fc90-1467-4b01-81ef-d82d90665d72",
- "version": "KqlParameterItem/1.0",
- "name": "Category",
- "type": 2,
- "description": "Workbook Category",
- "isRequired": true,
- "typeSettings": {
- "additionalResourceOptions": [],
- "showDefault": false
- },
- "jsonData": "[\"workbook\",\"sentinel\",\"usage\",\"tsg\",\"usageMetrics\",\"workItems\",\"performance-websites\",\"performance-appinsights\",\"performance-documentdb\",\"performance-storage\",\"performance-storageclassic\",\"performance-vm\",\"performance-vmclassic\",\"performance-sqlserverdatabases\",\"performance-virtualnetwork\",\"performance-virtualmachinescalesets\",\"performance-computedisks\",\"performance-networkinterfaces\",\"performance-logicworkflows\",\"performance-appserviceplans\",\"performance-applicationgateway\",\"performance-runbooks\",\"performance-servicebusqueues\",\"performance-iothubs\",\"performance-networkroutetables\",\"performance-cognitiveserviceaccounts\",\"performance-containerservicemanagedclusters\",\"performance-servicefabricclusters\",\"performance-cacheredis\",\"performance-eventhubnamespaces\",\"performance-hdinsightclusters\",\"failure-websites\",\"failure-appinsights\",\"failure-documentdb\",\"failure-storage\",\"failure-storageclassic\",\"failure-vm\",\"failure-vmclassic\",\"failure-sqlserverdatabases\",\"failure-virtualnetwork\",\"failure-virtualmachinescalesets\",\"failure-computedisks\",\"failure-networkinterfaces\",\"failure-logicworkflows\",\"failure-appserviceplans\",\"failure-applicationgateway\",\"failure-runbooks\",\"failure-servicebusqueues\",\"failure-iothubs\",\"failure-networkroutetables\",\"failure-cognitiveserviceaccounts\",\"failure-containerservicemanagedclusters\",\"failure-servicefabricclusters\",\"failure-cacheredis\",\"failure-eventhubnamespaces\",\"failure-hdinsightclusters\",\"storage-insights\",\"cosmosdb-insights\",\"vm-insights\",\"container-insights\",\"keyvaults-insights\",\"backup-insights\",\"rediscache-insights\",\"servicebus-insights\",\"eventhub-insights\",\"workload-insights\",\"adxcluster-insights\",\"wvd-insights\",\"activitylog-insights\",\"hdicluster-insights\",\"laws-insights\",\"hci-insights\"]",
- "defaultValue": "workbook"
- }
- ],
- "queryType": 1,
- "resourceType": "microsoft.resourcegraph/resources"
- },
- "name": "resource selection"
- },
- {
- "type": 1,
- "content": {
- "json": "# Private Workbook Conversion\r\n\r\nThis workbook shows private workbooks within the current subscription / resource group: \r\n\r\n| Subscription | Resource Group | \r\n|--|-|\r\n|{Subscription}|{ResourceGroup} |\r\n\r\nThe grid below will show the private workbooks found, and allows you to copy the contents, or open them as a full Azure Workbook where they can be saved.\r\n\r\nUse the button below to load the selected private workbook content into a new workbook. From there you can save it as a new workbook."
- },
- "name": "text - 5"
- },
- {
- "type": 3,
- "content": {
- "version": "KqlItem/1.0",
- "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GETARRAY\",\"path\":\"/{Subscription}/resourceGroups/{ResourceGroup}/providers/microsoft.insights/myworkbooks\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2020-10-20\"},{\"key\":\"category\",\"value\":\"{Category}\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"tablePath\":\"$..[?(@.kind == \\\"user\\\")]\",\"columns\":[{\"path\":\"$.properties.displayName\",\"columnid\":\"name\"},{\"path\":\"$.name\",\"columnid\":\"id\"},{\"path\":\"$.kind\",\"columnid\":\"type\",\"columnType\":\"string\"},{\"path\":\"$.properties.timeModified\",\"columnid\":\"modified\",\"columnType\":\"datetime\"},{\"path\":\"$.properties.sourceId\",\"columnid\":\"resource\",\"columnType\":\"string\"}]}}]}",
- "size": 1,
- "title": "Private Workbooks",
- "noDataMessage": "No private workbooks found",
- "noDataMessageStyle": 3,
- "exportedParameters": [
- {
- "fieldName": "id",
- "parameterName": "id"
- },
- {
- "fieldName": "name",
- "parameterName": "name",
- "parameterType": 1
- },
- {
- "fieldName": "resource",
- "parameterName": "resource",
- "parameterType": 1
- }
- ],
- "queryType": 12,
- "gridSettings": {
- "formatters": [
- {
- "columnMatch": "resource",
- "formatter": 13,
- "formatOptions": {
- "linkTarget": null,
- "showIcon": true
- }
- }
- ],
- "rowLimit": 1000,
- "filter": true,
- "labelSettings": [
- {
- "columnId": "resource",
- "label": "Linked To"
- }
- ]
- },
- "sortBy": []
- },
- "name": "list private workbooks"
- },
- {
- "type": 9,
- "content": {
- "version": "KqlParameterItem/1.0",
- "parameters": [
- {
- "id": "8d78556d-a4f3-4868-bf06-9e0980246d31",
- "version": "KqlParameterItem/1.0",
- "name": "config",
- "type": 1,
- "query": "{\"version\":\"ARMEndpoint/1.0\",\"data\":null,\"headers\":[],\"method\":\"GET\",\"path\":\"{Subscription}/resourceGroups/{ResourceGroup}/providers/microsoft.insights/myworkbooks/{id}\",\"urlParams\":[{\"key\":\"api-version\",\"value\":\"2020-10-20\"},{\"key\":\"sourceType\",\"value\":\"notebook\"},{\"key\":\"canFetchContent\",\"value\":\"true\"}],\"batchDisabled\":false,\"transformers\":[{\"type\":\"jsonpath\",\"settings\":{\"columns\":[{\"path\":\"$..serializedData\",\"columnid\":\"Content\"}]}}]}",
- "timeContext": {
- "durationMs": 86400000
- },
- "queryType": 12
- }
- ],
- "style": "pills",
- "queryType": 12
- },
- "conditionalVisibility": {
- "parameterName": "debug",
- "comparison": "isNotEqualTo"
- },
- "name": "turn response into param"
- },
- {
- "type": 11,
- "content": {
- "version": "LinkItem/1.0",
- "style": "list",
- "links": [
- {
- "id": "fc93ee9e-d5b2-41de-b74a-1fb62f0df49e",
- "linkTarget": "OpenBlade",
- "linkLabel": "Open Content as Workbook",
- "style": "primary",
- "bladeOpenContext": {
- "bladeName": "UsageNotebookBlade",
- "extensionName": "AppInsightsExtension",
- "bladeParameters": [
- {
- "name": "ComponentId",
- "source": "parameter",
- "value": "resource"
- },
- {
- "name": "NewNotebookData",
- "source": "parameter",
- "value": "config"
- }
- ]
- }
- }
- ]
- },
- "conditionalVisibility": {
- "parameterName": "config",
- "comparison": "isNotEqualTo"
- },
- "name": "links - 4"
- }
- ],
- "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
-}
-```
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
If data is coming from a query, you can select the option to pre-format the JSON
## Next steps
-[Get started with Azure Workbooks](workbooks-getting-started.md)
+[Get started with Azure Workbooks](workbooks-overview.md)
azure-monitor Workbooks Time Brushing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time-brushing.md
+
+ Title: Azure Workbooks time brushing
+description: Learn about time brushing in Azure Monitor workbooks.
++ Last updated : 01/08/2024+++
+# Time range brushing
+
+Time range brushing allows a user to "brush" or "scrub" a range on a chart and have that range output as a parameter value.
++
+You can also choose to export a parameter only when a range is explicitly brushed:
+
+ - If this setting is cleared (default), the parameter always has a value. When the parameter isn't brushed, the value is the full time range displayed in the chart.
+ - If this setting is selected, the parameter has no value until the user brushes a range on the chart. The value is set only after the user brushes.
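+
+Downstream query components can consume the exported value like any other time range parameter. A minimal sketch, assuming the brushed range is exported as a parameter named `BrushedRange` and queried against a Log Analytics `Perf` table:
+
+```
+Perf
+| where TimeGenerated between ({BrushedRange:start} .. {BrushedRange:end})
+| where ObjectName == 'Processor' and CounterName == '% Processor Time'
+| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 15m), Computer
+```
+
+If you selected the option to export only on an explicit brush, remember that the parameter stays empty until the user brushes a range.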
+
+## Brushing in a metrics chart
+
+When you enable time brushing on a metrics chart, you can "brush" a time range by dragging the mouse on the time chart.
++
+After the brush has stopped, the metrics chart zooms in to that range and exports the range as a time range parameter.
+An icon on the toolbar in the upper-right corner becomes active so that you can reset the time range to its original, unzoomed range.
+
+## Brushing in a query chart
+
+When you enable time brushing on a query chart, indicators appear that you can drag, or you can brush a range on the time chart.
++
+After the brush has stopped, the query chart shows that range as a time range parameter but won't zoom in. This behavior is different than the behavior of metrics charts. Because of the complexity of user-written queries, it might not be possible for workbooks to correctly update the range used by the query in the query content directly. If the query is using a time range parameter, it's possible to get this behavior by using a [global parameter](workbooks-parameters.md#global-parameters) instead.
+
+An icon on the toolbar in the upper-right corner becomes active so that you can reset the time range to its original, unzoomed range.
azure-monitor Workbooks Traffic Lights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-traffic-lights.md
+
+ Title: Azure Workbooks visual indicators and icons
+description: Learn how to create visual indicators and icons, such as traffic lights, in Azure Monitor Workbooks.
++++ Last updated : 01/08/2024++
+# Visual indicators and icons
+
+You can summarize status by using a simple visual indication instead of presenting the full range of data values. For example, you can categorize your computers by CPU utilization as cold, warm, or hot and categorize performance as satisfied, tolerated, or frustrated. You can use an indicator or icon that represents the status next to the underlying metric.
++
+## Create a traffic light icon
+
+The following example shows how to set up a traffic light icon per computer based on the CPU utilization metric.
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. [Add a parameter](workbooks-create-workbook.md#add-parameters), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
+1. Select **Add query** to add a log query control to the workbook.
+1. For **Query type**, select `Logs`, and for **Resource type**, select `Log Analytics`. For the resource, select a Log Analytics workspace in your subscription that has VM performance data.
+1. In the query editor, enter:
+
+ ```
+ Perf
+ | where ObjectName == 'Processor' and CounterName == '% Processor Time'
+ | summarize Cpu = percentile(CounterValue, 95) by Computer
+ | join kind = inner (Perf
+ | where ObjectName == 'Processor' and CounterName == '% Processor Time'
+ | make-series Trend = percentile(CounterValue, 95) default = 0 on TimeGenerated from {TimeRange:start} to {TimeRange:end} step {TimeRange:grain} by Computer
+ ) on Computer
+ | project-away Computer1, TimeGenerated
+ | order by Cpu desc
+ ```
+
+1. Set **Visualization** to `Grid`.
+1. Select **Column Settings**.
+1. In the **Columns** section, set:
+ - **Cpu**
+ - **Column renderer**: `Thresholds`
+ - **Custom number formatting**: `checked`
+ - **Units**: `Percentage`
+ - **Threshold settings** (last two need to be in order):
+ - **Icon**: `Success`, **Operator**: `Default`
+ - **Icon**: `Critical`, **Operator**: `>`, **Value**: `80`
+ - **Icon**: `Warning`, **Operator**: `>`, **Value**: `60`
+ - **Trend**
+ - **Column renderer**: `Spark line`
+ - **Color palette**: `Green to Red`
+ - **Minimum value**: `60`
+ - **Maximum value**: `80`
+1. Select **Save and Close** to commit the changes.
++
+You can also pin this grid to a dashboard by using **Pin to dashboard**. The pinned grid automatically binds to the time range in the dashboard.
++
+## Next steps
+
+[Learn about the types of visualizations you can use to create rich visual reports with Azure Workbooks](workbooks-visualizations.md).
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
Workbooks support these kinds of visual components:
## Next steps
-[Get started with Azure Workbooks](workbooks-getting-started.md)
+[Get started with Azure Workbooks](workbooks-overview.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Alerts|[Create or edit a metric alert rule](alerts/alerts-create-new-alert-rule.
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Custom events code samples and instructions have been added to .NET Core / .NET tabs.| Application-Insights|[Migrate availability tests](app/availability-test-migration.md)|We've clarified the URL ping tests retirement statement. Migrate your URL ping tests as soon as possible using the PowerShell scripts provided in this article.| Application-Insights|[Enable Azure Monitor Application Insights Real User Monitoring](app/javascript-sdk.md)|Additional guidance has been added on when to use the npm package.|
-Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|We confirmed that migrating from classic to workspace-based resources doesn't introduce application downtime or restarts, and it does not change your existing instrumentation key or connection string.|"
+Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|We confirmed that migrating from classic to workspace-based resources doesn't introduce application downtime or restarts, and it does not change your existing instrumentation key or connection string.|
Logs|[Correlate data in Azure Data Explorer and Azure Resource Graph with data in a Log Analytics workspace](logs/azure-monitor-data-explorer-proxy.md)|Explained how to query Azure Data Explorer external tables using the `adx("")` expression. | Logs|[Logs Ingestion API in Azure Monitor](logs/logs-ingestion-api-overview.md)|Updated Log Ingestion API version.| Profiler|[Profile production applications in Azure with Application Insights Profiler](profiler/profiler-overview.md)|Add support for Java profiler and link to docs from .NET profiler overview.|
Containers|[Monitor Kubernetes clusters using Azure services and cloud native to
Containers|[Monitor Azure Kubernetes Service (AKS)](/azure/aks/monitor-aks)|New article providing simplified introduction to monitoring AKS cluster.| Containers|[Container insights overview](containers/container-insights-overview.md)|Rewritten for to include new features and managed services.| Essentials|[Send Prometheus metrics to Log Analytics workspace with Container insights](containers/container-insights-prometheus-logs.md)|Updated to simplify article to only legacy method of sending Prometheus metrics to Log Analytics workspace.|
-Essentials|[Collect Prometheus metrics from an AKS cluster](containers/prometheus-metrics-enable.md)|Updated to include additional onboarding methods.|
+Essentials|[Collect Prometheus metrics from an AKS cluster](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)|Updated to include additional onboarding methods.|
Essentials|[Azure Monitor managed service for Prometheus rule groups](essentials/prometheus-rule-groups.md)|Expanded "Limiting rules to a specific cluster"| Logs|[Enable cost optimization settings](containers/container-insights-cost-config.md)|Updated for portal updates and additional details on workspace tables.|
-Logs|[Enable the ContainerLogV2 schema](containers/container-insights-logging-v2.md)|Updated configuration section.|
+Logs|[Enable the ContainerLogV2 schema](containers/container-insights-logs-schema.md)|Updated configuration section.|
Logs|[Manage access to Log Analytics workspaces](logs/manage-access.md)|Simplified flow for setting table-level access.| Logs|[Query data in Azure Data Explorer and Azure Resource Graph from Azure Monitor](logs/azure-monitor-data-explorer-proxy.md)|Azure Monitor now lets you query data in Azure Resource Graph from your Log Analytics workspace. |
Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|Up
Application-Insights|[Azure Monitor overview](overview.md)|Updated Azure Monitor overview graphics along with related content.| Containers|[Metric alert rules in Container insights (preview)](containers/container-insights-metric-alerts.md)|Updated to indicate deprecation of metric alerts.| Containers|[Azure Monitor Container insights for Azure Arc-enabled Kubernetes clusters](containers/container-insights-enable-arc-enabled-clusters.md)|Added option for Azure Monitor Private Link Scope (AMPLS) + Proxy.|
-Essentials|[Collect Prometheus metrics from an AKS cluster (preview)](essentials/prometheus-metrics-enable.md)|Enabled Windows metric collection metrics add-on.|
+Essentials|[Collect Prometheus metrics from an AKS cluster (preview)](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)|Enabled Windows metric collection metrics add-on.|
Essentials|[Query Prometheus metrics by using the API and PromQL](essentials/prometheus-api-promql.md)|New article: Query Azure Monitor workspaces by using REST and PromQL.| Essentials|[Configure remote write for Azure Monitor managed service for Prometheus by using Azure Active Directory authentication (preview)](essentials/prometheus-remote-write-active-directory.md)|Added Prometheus remote write Active Directory relabel.| Essentials|[Built-in policies for Azure Monitor](essentials/diagnostics-settings-policies-deployifnotexists.md)|Added new built-in policies to create diagnostic settings in Azure Monitor with deploy-if-not-exists defaults.|
Containers|[Enable cost-optimization settings (preview)](containers/container-in
Essentials|[Data collection transformations in Azure Monitor](essentials/data-collection-transformations.md)|Added section and sample for using transformations to send to multiple destinations.| Essentials|[Custom metrics in Azure Monitor (preview)](essentials/metrics-custom-overview.md)|Added reference to the limit of 64 KB on the combined length of all custom metrics names.| Essentials|[Azure monitoring REST API walkthrough](essentials/rest-api-walkthrough.md)|Refreshed REST API walkthrough.|
-Essentials|[Collect Prometheus metrics from AKS cluster (preview)](essentials/prometheus-metrics-enable.md)|Added enabling Prometheus metric collection by using Azure Policy and Bicep.|
+Essentials|[Collect Prometheus metrics from AKS cluster (preview)](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)|Added enabling Prometheus metric collection by using Azure Policy and Bicep.|
Essentials|[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](essentials/prometheus-metrics-multiple-workspaces.md)|Updated sending metrics to multiple Azure Monitor workspaces.| General|[Analyze and visualize data](best-practices-analysis.md)|Revised the article about analyzing and visualizing monitoring data to provide a comparison of the different visualization tools and guide customers on when to choose each tool for their implementation. | Logs|[Tutorial: Send data to Azure Monitor Logs by using the REST API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Made minor fixes and updated sample data.|
Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/ch
Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/change-analysis-query.md)|Merged two low-engagement docs into Visualizations article and removed from TOC.| Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/change-analysis-visualizations.md)|Merged two low-engagement docs into Visualizations article and removed from TOC.| Change-Analysis|[Track a web app outage by using Change Analysis](change/tutorial-outages.md)|Added new section on virtual network changes to the tutorial.|
-Containers|[Azure Monitor container insights for Azure Kubernetes Service hybrid clusters (preview)](containers/container-insights-enable-provisioned-clusters.md)|New article.|
+Containers|[Azure Monitor container insights for Azure Kubernetes Service hybrid clusters (preview)](containers/kubernetes-monitoring-enable.md?tabs=cli)|New article.|
Containers|[Syslog collection with Container insights (preview)](containers/container-insights-syslog.md)|New article.| Essentials|[Query Prometheus metrics by using Azure Workbooks (preview)](essentials/prometheus-workbooks.md)|New article.| Essentials|[Azure Workbooks data sources](visualize/workbooks-data-sources.md)|Added section for Prometheus metrics.|
Containers|[Enable Container insights for Azure Kubernetes Service cluster](cont
Containers Prometheus|[Query logs from Container insights](containers/container-insights-log-query.md)|Updated to include log queries for Prometheus data.| Containers Prometheus|[Collect Prometheus metrics with Container insights](containers/container-insights-prometheus.md?tabs=cluster-wide)|Updated to include Azure Monitor managed service for Prometheus.| Essentials Prometheus|[Metrics in Azure Monitor](essentials/data-platform-metrics.md)|Updated to include Azure Monitor managed service for Prometheus.|
-Essentials Prometheus|<ul> <li> [Azure Monitor workspace overview (preview)](essentials/azure-monitor-workspace-overview.md?tabs=azure-portal) </li><li> [Overview of Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-metrics-overview.md) </li><li>[Rule groups in Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-rule-groups.md)</li><li>[Remote-write in Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-remote-write-managed-identity.md) </li><li>[Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](essentials/prometheus-grafana.md)</li><li>[Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-troubleshoot.md)</li><li>[Default Prometheus metrics configuration in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-default.md)</li><li>[Scrape Prometheus metrics at scale in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-scale.md)</li><li>[Customize scraping of Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-configuration.md)</li><li>[Create, validate, and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-validate.md)</li><li>[Minimal Prometheus ingestion profile in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-configuration-minimal.md)</li><li>[Collect Prometheus metrics from AKS cluster (preview)](essentials/prometheus-metrics-enable.md)</li><li>[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](essentials/prometheus-metrics-multiple-workspaces.md) </li></ul> |New articles: Public preview of Azure Monitor managed service for Prometheus.|
+Essentials Prometheus|<ul> <li> [Azure Monitor workspace overview (preview)](essentials/azure-monitor-workspace-overview.md?tabs=azure-portal) </li><li> [Overview of Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-metrics-overview.md) </li><li>[Rule groups in Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-rule-groups.md)</li><li>[Remote-write in Azure Monitor managed service for Prometheus (preview)](essentials/prometheus-remote-write-managed-identity.md) </li><li>[Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](essentials/prometheus-grafana.md)</li><li>[Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-troubleshoot.md)</li><li>[Default Prometheus metrics configuration in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-default.md)</li><li>[Scrape Prometheus metrics at scale in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-scale.md)</li><li>[Customize scraping of Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-configuration.md)</li><li>[Create, validate, and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-validate.md)</li><li>[Minimal Prometheus ingestion profile in Azure Monitor (preview)](essentials/prometheus-metrics-scrape-configuration-minimal.md)</li><li>[Collect Prometheus metrics from AKS cluster (preview)](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)</li><li>[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](essentials/prometheus-metrics-multiple-workspaces.md) </li></ul> |New articles: Public preview of Azure Monitor managed service for Prometheus.|
Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](./essentials/prometheus-remote-write-managed-identity.md)|Added information that verifies Prometheus remote write is working correctly.|
Essentials|[Azure resource logs](./essentials/resource-logs.md)|Clarified which blob's logs are written to, and when.|
Essentials|[Resource Manager template samples for Azure Monitor](resource-manager-samples.md?tabs=portal)|Added template deployment methods.|
All references to unsupported versions of .NET and .NET CORE are scrubbed from A
| Article | Description |
|:---|:---|
-| [Configure ContainerLogv2 schema (preview) for Container insights](containers/container-insights-logging-v2.md) | New article: Describes new schema for container logs. |
+| [Configure ContainerLogv2 schema (preview) for Container insights](containers/container-insights-logs-schema.md) | New article: Describes new schema for container logs. |
| [Enable Container insights](containers/container-insights-onboard.md) | Rewritten to improve clarity. |
| [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) | Added Bicep examples. |
### Insights
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
Last updated 01/26/2023
-# What is Azure NetApp Files
+# What is Azure NetApp Files?
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides NAS volumes as a service for which you can create NetApp accounts, capacity pools, select service and performance levels, create volumes, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and enterprise applications rely on on-premises. Azure NetApp Files supports SMB and NFS protocols and can be used for various use cases such as file sharing, home directories, databases, high-performance computing and more. Additionally, it also provides built-in availability, data protection and disaster recovery capabilities.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides volumes as a service for which you can create NetApp accounts, capacity pools, select service and performance levels, create volumes, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and enterprise applications rely on on-premises. Azure NetApp Files supports SMB and NFS protocols and can be used for various use cases such as file sharing, home directories, databases, high-performance computing and more. Additionally, it also provides built-in availability, data protection and disaster recovery capabilities.
## High performance
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* Japan East
* Japan West
* Korea Central
+* Korea South
* North Central US
* North Europe
* Norway East
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* Australia East
* Australia Southeast
* Brazil South
+* Brazil Southeast
* Canada Central
* Canada East
* Central India
Standard storage with cool access is supported for the following regions:
* France Central
* North Central US
* North Europe
+* Southeast Asia
* Switzerland North
* Switzerland West
* UAE North
+* West US
## Effects of cool access on data
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
ms.assetid:
na-+ Last updated 01/13/2023
azure-netapp-files Terraform Manage Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/terraform-manage-volume.md
ms.assetid:
na+ Last updated 12/20/2023
The following instructions are a high-level overview of the steps required to up
* [Update Terraform-Managed Azure NetApp Files Volume Network Feature from Basic to Standard](configure-network-features.md#update-terraform-managed-azure-netapp-files-volume-from-basic-to-standard) * [Populate Availability Zone for Terraform-Managed Azure NetApp Files Volume](manage-availability-zone-volume-placement.md#populate-availability-zone-for-terraform-managed-volumes)
-* [Managing Azure NetApp Files preview features with Terraform Cloud and AzAPI Provider](https://techcommunity.microsoft.com/t5/azure-architecture-blog/managing-azure-netapp-files-preview-features-with-terraform/ba-p/3657714)
+* [Managing Azure NetApp Files preview features with Terraform Cloud and AzAPI Provider](https://techcommunity.microsoft.com/t5/azure-architecture-blog/managing-azure-netapp-files-preview-features-with-terraform/ba-p/3657714)
azure-relay Relay Hybrid Connections Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-java-get-started.md
Title: Azure Relay Hybrid Connections - HTTP requests in Java
description: Write a Java console application for Azure Relay Hybrid Connections HTTP requests. Last updated 01/04/2024-+ # Get started with Relay Hybrid Connections HTTP requests in Java
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 12/20/2023 Last updated : 1/8/2024
Azure VMware Solution monitors the following conditions on the host:
> [!NOTE] > Azure VMware Solution tenant admins must not edit or delete the previously defined VMware vCenter Server alarms because they are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
-## Backup and restoration
+## Backup and restore
-Private cloud vCenter Server and NSX-T Data Center configurations are on an hourly backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
+Azure VMware Solution private cloud vCenter Server, NSX-T Data Center, and HCX Manager (if enabled) configurations are on a daily backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
Azure VMware Solution continuously monitors the health of both the physical underlay and the VMware Solution components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
Title: Connect multiple Azure VMware Solution private clouds in the same region
description: Learn how to create a network connection between two or more Azure VMware Solution private clouds located in the same region. Previously updated : 12/22/2023 Last updated : 1/8/2024
The **AVS Interconnect** feature lets you create a network connection between two or more Azure VMware Solution private clouds located in the same region. It creates a routing link between the management and workload networks of the private clouds to enable network communication between the clouds.
-You can connect a private cloud to multiple private clouds, and the connections are nontransitive. For example, if _private cloud 1_ is connected to _private cloud 2_, and _private cloud 2_ is connected to _private cloud 3_, private clouds 1 and 3 wouldn't communicate until they were directly connected.
+You can connect a private cloud to multiple private clouds, and the connections are nontransitive. For example, if _private cloud A_ is connected to _private cloud B_, and _private cloud B_ is connected to _private cloud C_, private clouds A and C wouldn't communicate until they were directly connected.
You can only connect private clouds in the same region. To connect private clouds in different regions, [use ExpressRoute Global Reach](tutorial-expressroute-global-reach-private-cloud.md) to connect them in the same way you connect your private cloud to your on-premises circuit.
The Azure VMware Solution Interconnect feature is available in all regions.
- Routed IP address space in each cloud is unique and doesn't overlap >[!NOTE]
->The **AVS interconnect** feature doesn't check for overlapping IP space the way native Azure vNet peering does before creating the peering. Therefore, it's your responsibility to ensure that there isn't overlap between the private clouds.
+>The **AVS Interconnect** feature doesn't check for overlapping IP space the way native Azure vNet peering does before creating the peering. Therefore, it's your responsibility to ensure that there isn't overlap between the private clouds.
>
->In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX-T Data Center T0 gateway on each private cloud.
-
+>In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX-T Data Center T0 gateway on each private cloud.
## Add connection between private clouds
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/create-placement-policy.md
Title: Create placement policy
description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. Previously updated : 12/07/2023 Last updated : 1/8/2024 #Customer intent: As an Azure service administrator, I want to control the placement of virtual machines on hosts within a cluster in my private cloud.
In Azure VMware Solution, clusters in a private cloud are a managed resource. As a result, the CloudAdmin role can't make certain changes to the cluster from the vSphere Client, including the management of Distributed Resource Scheduler (DRS) rules.
-The placement policy feature is available in all Azure VMware Solution regions.
-Placement policies let you control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal.
-When you create a placement policy, it includes a DRS rule in the specified vSphere cluster.
-It also includes other logic for interoperability with Azure VMware Solution operations.
+The placement policy feature is available in all Azure VMware Solution regions. Placement policies let you control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. When you create a placement policy, it includes a DRS rule in the specified vSphere cluster. It also includes other logic for interoperability with Azure VMware Solution operations.
A placement policy has at least five required components:
A placement policy has at least five required components:
- **Virtual machine** - Defines the VMs and hosts for the policy. Depending on the type of rule you create, your policy might require you to specify some number of VMs and hosts. For more information, see [Placement policy types](#placement-policy-types). - ## Prerequisite You must have _Contributor_ level access to the private cloud to manage placement policies. - ## Placement policy types ### VM-VM policies
The assignment of hosts isn't required or permitted for this policy type.
- **VM-VM Anti-Affinity** policies instruct DRS to try keeping the specified VMs apart from each other on separate hosts. It's useful in availability scenarios where a problem with one host doesn't affect multiple VMs within the same policy. - ### VM-Host policies **VM-Host** policies specify if selected VMs can run on selected hosts. To avoid interference with platform-managed operations such as host maintenance mode and host replacement, **VM-Host** policies in Azure VMware Solution are always preferential (also known as "should" rules). Accordingly, **VM-Host** policies [may not be honored in certain scenarios](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.resmgmt.doc/GUID-793013E2-0976-43B7-9A00-340FA76859D0.html). For more information, see [Monitor the operation of a policy](#monitor-the-operation-of-a-policy).
In addition to choosing a name and cluster for the policy, a **VM-Host** policy
- **VM-Host Anti-Affinity** policies instruct DRS to try running the specified VMs on hosts other than the ones defined. - ## Considerations
-### Cluster scale in
+### Cluster scale-in
Azure VMware Solution attempts to prevent certain DRS rule violations from occurring when performing cluster scale-in operations.
You can't remove the last host from a VM-Host policy. However, if you need to re
You can't have a VM-VM Anti Affinity policy with more VMs than the number of hosts in a cluster. If removing a host results in fewer hosts in the cluster than VMs, you receive an error preventing the operation. You can remediate it by first removing VMs from the rule and then removing the host from the cluster. - ### Rule conflicts If DRS rule conflicts are detected when you create a VM-VM policy, it results in that policy being created in a disabled state following standard [VMware DRS Rule behavior](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.resmgmt.doc/GUID-69C738B6-5FC8-4189-9CB5-DD90A5A05979.html). For more information on viewing rule conflicts, see [Monitor the operation of a policy](#monitor-the-operation-of-a-policy). -- ## Create a placement policy There's no defined limit to the number of policies that you create. However, the more placement constraints you create, the more challenging it is for vSphere DRS to effectively move virtual machines within the cluster and provide the resources needed by the workloads.
Make sure to review the requirements for the [policy type](#placement-policy-typ
>You may also select the Cluster from the Placement Policy overview pane and then select **Create**. > -- 1. Provide a descriptive name, select the policy type, and select the cluster where the policy is created. Then select **Enabled**. >[!WARNING] >If you disable the policy, then the policy and the underlying DRS rule are created, but the policy actions are ignored until you enable the policy. - 1. If you selected **VM-Host affinity** or **VM-Host anti-affinity** as the type, select **+ Add hosts** and the hosts to include in the policy. You can select multiple hosts. >[!NOTE]
Make sure to review the requirements for the [policy type](#placement-policy-typ
> 1. Select **+ Add virtual machine** and the VMs to include in the policy. You can select multiple VMs.-
-
+
>[!NOTE] >The select hosts pane shows how many VM-Host policies are associated with the host and the total number of VMs contained in those associated policies.
Make sure to review the requirements for the [policy type](#placement-policy-typ
:::image type="content" source="media/placement-policies/create-placement-policy-8.png" alt-text="Screenshot showing the placement policy as Enabled after it gets created." lightbox="media/placement-policies/create-placement-policy-8.png"::: - ## Edit a placement policy You can change the state of a policy, add a new resource, or unassign an existing resource.
You can change the state of a policy to **Enabled** or **Disabled**.
1. Review the changes and select **Update policy**. If you want to make changes, select **Back: Basics**. - ### Update the resources in a policy You can add new resources, such as a VM or a host, to a policy or remove existing ones.
To add a new resource, select **Edit virtual machine** or **Edit host**, select
1. Review the changes and select **Update policy**. If you want to make changes, select **Back : Basics**. - ## Delete a policy You can delete a placement policy and its corresponding DRS rule.
Manual vMotion of the VM and automatic initial placement of the VM continues to
### Are placement policies the same as DRS affinity rules? Yes, and no. While vSphere DRS implements the current set of policies, we simplified the experience. Modifying VM groups and Host groups are a cumbersome operation, especially as hosts are ephemeral in nature and could be replaced in a cloud environment. As hosts are replaced in the vSphere inventory in an on-premises environment, the vSphere admin must modify the host group to ensure that the desired VM-Host placement constraints remain in effect. Placement policies in Azure VMware Solution update the Host groups when a host is rotated or changed. Similarly, if you scale in a cluster, the Host Group is automatically updated, as applicable. The automatic update eliminates the overhead of managing the Host Groups for the customer.
+### As this is an existing functionality available in vCenter Server, why can't I use it directly?
-### As this is an existing functionality available in vCenter, why can't I use it directly?
+Azure VMware Solution provides a private cloud in Azure. In this managed VMware solution infrastructure, Microsoft manages the clusters, hosts, datastores, and distributed virtual switches in the private cloud. At the same time, the tenant is responsible for managing the workloads deployed on the private cloud. As a result, the tenant administering the private cloud doesn't have the [same set of privileges](concepts-identity.md) as available to the VMware solution administrator in an on-premises deployment.
-Azure VMware Solution provides a VMware private cloud in Azure. In this managed VMware infrastructure, Microsoft manages the clusters, hosts, datastores, and distributed virtual switches in the private cloud. At the same time, the tenant is responsible for managing the workloads deployed on the private cloud. As a result, the tenant administering the private cloud doesn't have the [same set of privileges](concepts-identity.md) as available to the VMware administrator in an on-premises deployment.
-
-Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in an Azure VMware Solution environment, as some of those rules can block day-to-day operation the private cloud. Placement Policies provides a way to define those rules using the Azure VMware Solution portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, Placement Policies ensure the rules don't impact the day-to-day infrastructure maintenance and operation activities.
+Further, the lack of the desired granularity in the vSphere privileges presents some challenges when managing the placement of the workloads on the private cloud. For example, vSphere DRS rules commonly used on-premises to define affinity and anti-affinity rules can't be used as-is in an Azure VMware Solution environment, as some of those rules can block day-to-day operation of the private cloud. Placement Policies provide a way to define those rules using the Azure portal, thereby circumventing the need to use DRS rules. Coupled with a simplified experience, Placement Policies ensure the rules don't impact the day-to-day infrastructure maintenance and operation activities.
### What is the difference between the VM-Host affinity policy and Restrict VM movement?
The VM-Host **MUST** rules aren't supported because they block maintenance opera
VM-Host **SHOULD** rules are preferential rules, where vSphere DRS tries to accommodate the rules to the extent possible. Occasionally, vSphere DRS may vMotion VMs subjected to the VM-Host **SHOULD** rules to ensure that the workloads get the resources they need. It's a standard vSphere DRS behavior, and the Placement policies feature doesn't change the underlying vSphere DRS behavior. If you create conflicting rules, those conflicts can show up on the vCenter Server, and the newly defined rules might not take effect. It's a standard vSphere DRS behavior, the logs for which can be observed in the vCenter Server.-
backup Backup Azure Database Postgresql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-support-matrix.md
You can use [Azure Backup](./backup-overview.md) to protect Azure Database for P
## Supported regions
-Azure Database for PostgreSQL server backup is available in the following regions:
-
-East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, Brazil South, Canada Central, North Europe, West Europe, UK South, UK West, Germany West Central, Switzerland North, Switzerland West, East Asia, Southeast Asia, Japan East, Japan West, Korea Central, Korea South, India Central, Australia East, Australia Central, Australia Central 2, UAE North
+Azure Database for PostgreSQL server backup is available in all regions, except for Germany Central (Sovereign), Germany Northeast (Sovereign) and China regions.
## Support scenarios
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
# What are the comparisons between Azure CDN product features?
-Azure Content Delivery Network (CDN) includes four products:
+Azure Content Delivery Network (CDN) includes three products:
* **Azure CDN Standard from Microsoft** * **Azure CDN Standard from Edgio (formerly Verizon)**
communication-services Call Automation Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/call-automation-ai.md
Last updated 12/08/2023
-+ zone_pivot_groups: acs-js-csharp-java-python
communication-services Meeting Interop Features Inline Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-inline-image.md
Last updated 03/27/2023
-+ zone_pivot_groups: acs-js-csharp
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Last updated 10/24/2023 -+ # Azure Container Apps ARM and YAML template specifications
container-apps Enable Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/enable-dapr.md
description: Learn more about enabling Dapr on your Azure Container App service
-+ Last updated 12/18/2023
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
# Health probes in Azure Container Apps
-Azure Container Apps Health probes are based on [Kubernetes health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). These probes allow the Container Apps runtime to regularly inspect the status of your container apps.
+Azure Container Apps health probes allow the Container Apps runtime to regularly inspect the status of your container apps.
You can set up probes using either TCP or HTTP(S) exclusively.
container-registry Container Registry Auth Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-kubernetes.md
Title: Authenticate with an Azure container registry using a Kubernetes pull secret description: Learn how to provide a Kubernetes cluster with access to images in your Azure container registry by creating a pull secret using a service principal -+
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
Title: Authenticate with service principal description: Provide access to images in your private container registry by using a Microsoft Entra service principal. -+
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
Title: Purge tags and manifests
description: Use a purge command to delete multiple tags and manifests from an Azure container registry based on age and a tag filter, and optionally schedule purge operations. + Last updated 10/31/2023
container-registry Container Registry Configure Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-configure-conditional-access.md
Title: Configure conditional access to your Azure Container Registry.
description: Learn how to configure conditional access to your registry by using Azure CLI and Azure portal. + Last updated 11/02/2023- # Conditional Access policy for Azure Container Registry
Create a Conditional Access policy and assign your test group of users as follow
> [!div class="nextstepaction"] > [Azure Policy definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md). >[Common access concerns that Conditional Access policies can help with](../active-directory/conditional-access/concept-conditional-access-policy-common.md).
-> [Conditional Access policy components](../active-directory/conditional-access/concept-conditional-access-policies.md).
+> [Conditional Access policy components](../active-directory/conditional-access/concept-conditional-access-policies.md).
container-registry Container Registry Disable Authentication As Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-disable-authentication-as-arm.md
Title: Disable authentication as ARM template
description: "Disabling azureADAuthenticationAsArmPolicy will force the registry to use ACR audience token." + Last updated 10/31/2023- # Disable authentication as ARM template
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-access.md
Related links:
* [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md) * [HTTP/HTTPS proxy configuration](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy)
-* [Geo-replicationin Azure Container Registry](container-registry-geo-replication.md)
+* [Geo-replication in Azure Container Registry](container-registry-geo-replication.md)
* [Monitor Azure Container Registry](monitor-service.md) ### Configure public access to registry
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
The actual minimum RU/s may vary depending on your account configuration. You ca
#### Minimum throughput on container
-To estimate the minimum throughput required of a container with manual throughput, find the maximum of:
+**Manual throughput**
+
+To estimate the minimum RU/s required of a container with manual throughput, find the maximum of:
* 400 RU/s
* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the container / 100
To estimate the minimum throughput required of a container with manual throughpu
For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 1 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 2000 GB. The minimum RU/s is now `MAX(400, 2000 * 1 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
+**Autoscale throughput**
+
+To estimate the minimum autoscale max RU/s required of a container with autoscale throughput, find the maximum of:
+
+* 1000 RU/s
+* Current storage in GB * 10 RU/s
+* Highest RU/s ever provisioned on the container / 10
+
+For example, you have a container provisioned with 1000 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum max RU/s is now `MAX(1000, 20 * 10 RU/s per GB, 50,000 RU/s / 10)` = 5000 RU/s. Over time, the storage grows to 2000 GB. The minimum max RU/s is now `MAX(1000, 2000 * 10 RU/s per GB, 50,000 / 10)` = 20,000 RU/s.
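As a quick illustration only (not part of the article), the container-level floors above can be written as a couple of helper functions; the function names are invented for this sketch:

```csharp
// Illustrative sketch of the container-level minimum RU/s floors described above.
using System;

static int MinimumManualContainerRus(double storageInGb, int highestRusEverProvisioned) =>
    (int)Math.Max(400, Math.Max(storageInGb * 1, highestRusEverProvisioned / 100.0));

static int MinimumAutoscaleContainerMaxRus(double storageInGb, int highestRusEverProvisioned) =>
    (int)Math.Max(1000, Math.Max(storageInGb * 10, highestRusEverProvisioned / 10.0));

// The worked examples from this section: 20 GB of data after scaling to 50,000 RU/s.
Console.WriteLine(MinimumManualContainerRus(20, 50_000));       // 500
Console.WriteLine(MinimumAutoscaleContainerMaxRus(20, 50_000)); // 5000
```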
+ #### Minimum throughput on shared throughput database
-To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:
+**Manual throughput**
+
+To estimate the minimum RU/s required of a shared throughput database with manual throughput, find the maximum of:
* 400 RU/s
* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the database / 100
* 400 + MAX(Container count - 25, 0) * 100 RU/s
-For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
+For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-In summary, here are the minimum provisioned RU limits when using manual throughput.
+**Autoscale throughput**
-| Resource | Limit |
-| | |
-| Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./resource-model.md#azure-cosmos-db-containers)) | 400 |
-| Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./resource-model.md#azure-cosmos-db-containers)) | 400 RU/s for first 25 containers. |
+To estimate the minimum autoscale max RU/s required of a shared throughput database with autoscale throughput, find the maximum of:
+
+* 1000 RU/s
+* Current storage in GB * 10 RU/s
+* Highest RU/s ever provisioned on the database / 10
+* 1000 + MAX(Container count - 25, 0) * 1000 RU/s
+
+For example, you have a database provisioned with 1000 RU/s, 15 GB of storage, and 10 containers. The minimum max RU/s for the autoscale database is `MAX(1000, 15 * 10 RU/s per GB, 1000 / 10, 1000 + 0 )` = 1000 RU/s. If there were 30 containers in the database, the minimum max RU/s would be `1000 + MAX(30 - 25, 0) * 1000 RU/s` = 6000 RU/s.
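And the database-level floors, again only as a hedged sketch with invented helper names, including the extra per-container term once a shared database holds more than 25 containers:

```csharp
// Illustrative sketch of the shared-throughput database minimums described above.
using System;

static int MinimumManualDatabaseRus(double storageInGb, int highestRusEver, int containerCount) =>
    (int)Math.Max(Math.Max(400, storageInGb * 1),
        Math.Max(highestRusEver / 100.0, 400 + Math.Max(containerCount - 25, 0) * 100));

static int MinimumAutoscaleDatabaseMaxRus(double storageInGb, int highestRusEver, int containerCount) =>
    (int)Math.Max(Math.Max(1000, storageInGb * 10),
        Math.Max(highestRusEver / 10.0, 1000 + Math.Max(containerCount - 25, 0) * 1000));

// 15 GB of storage, highest-ever 400/1000 RU/s, 30 containers (the examples above).
Console.WriteLine(MinimumManualDatabaseRus(15, 400, 30));        // 900
Console.WriteLine(MinimumAutoscaleDatabaseMaxRus(15, 1000, 30)); // 6000
```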
+
+
+In summary, here are the minimum provisioned RU limits when using provisioned throughput.
+
+| Provisioning Type | Resource | Limit |
+| | | |
+| Manual throughput | Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./set-throughput.md#set-throughput-on-a-container)) | 400 |
+| Manual throughput | Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./set-throughput.md#set-throughput-on-a-database)) | 400 RU/s for first 25 containers. |
+| Autoscale throughput | Minimum max RUs per container ([dedicated throughput provisioned mode with autoscale throughput](./provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works)) | 1000 |
+| Autoscale throughput | Minimum max RUs per database ([shared throughput provisioned mode with autoscale throughput](./provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works)) | 1000 RU/s for first 25 containers. |
Azure Cosmos DB supports programmatic scaling of throughput (RU/s) per container or database via the SDKs or portal.
cosmos-db Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-release-notes.md
Last updated 09/11/2023
The Azure Cosmos DB emulator is updated at a regular cadence with release notes provided in this article. > [!div class="nextstepaction"]
-> [Download latest version (``2.14.12``)](https://aka.ms/cosmosdb-emulator)
+> [Download latest version (``2.14.16``)](https://aka.ms/cosmosdb-emulator)
## Supported versions Only the most recent version of the Azure Cosmos DB emulator is actively supported.
-## Latest version ``2.14.12``
+## Latest version ``2.14.16``
-> *Released March 20, 2023*
+> *Released January 8, 2024*
-- This release fixes an issue impacting Gremlin and Table endpoint API types. Prior to this fix a client application fails with a 500 status code when trying to connect to the public emulator's endpoint.
+- This release fixes an issue that caused the emulator to bind to the `loopback` interface instead of the `public interface`, even when the /AllowNetworkAccess command-line option was passed.
## Previous releases > [!WARNING] > Previous versions of the emulator are not supported by the product group.
+### ``2.14.12`` (March 20, 2023)
+
+- This release fixes an issue impacting Gremlin and Table endpoint API types. Prior to this fix a client application fails with a 500 status code when trying to connect to the public emulator's endpoint.
+ ### ``2.14.11`` (January 27, 2023) - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB.
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Previously updated : 03/24/2022 Last updated : 01/05/2024
The pricing model of Azure Cosmos DB simplifies the cost management and planning
- **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations. -- **Storage**: You're billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You're billed only for the storage you consume.
+- **Storage**: You're billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You're billed only for the storage you consume. The smallest amount of data billed for any non-empty container is 1 GB.
The pricing model in Azure Cosmos DB is consistent across all APIs. For more information, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/), [Understanding your Azure Cosmos DB bill](understand-your-bill.md) and [How Azure Cosmos DB pricing model is cost-effective for customers](total-cost-ownership.md).
cosmos-db How To Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-private-link.md
-
- - ignite-2023
+ Last updated 11/01/2023 # CustomerIntent: As a security administrator, I want to use Azure Private Link so that I can ensure that database connections occur over privately-managed virtual network endpoints.
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-java.md
Currently, the bulk executor library is supported only by Azure Cosmos DB for No
- On Ubuntu, you can run `apt-get install maven` to install Maven.
-* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [create database account](quickstart-java.md#create-a-database-account) section of the Java quickstart article.
+* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [create database account](quickstart-java.md) section of the Java quickstart article.
## Clone the sample application
cosmos-db Client Metrics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/client-metrics-java.md
+
+ Title: Use Micrometer metrics for Java SDK in Azure Cosmos DB
+description: Learn how to consume Micrometer metrics in the Java SDK for Azure Cosmos DB.
+++++ Last updated : 12/14/2023++
+# Micrometer metrics for Java
+
+The [Java SDK for Azure Cosmos DB](samples-java.md) implements client metrics using [Micrometer](https://micrometer.io/) for instrumentation in popular observability systems like [Prometheus](https://prometheus.io/). This article includes instructions and code snippets for scraping metrics into Prometheus, taken from [this sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav).
+
+## Consume metrics from Prometheus
+
+You can download Prometheus from the [Prometheus downloads page](https://prometheus.io/download/). To consume Micrometer metrics from the Java SDK for Azure Cosmos DB in Prometheus, first make sure you've imported the required registry and client libraries:
+
+```xml
+<dependency>
+ <groupId>io.micrometer</groupId>
+ <artifactId>micrometer-registry-prometheus</artifactId>
+ <version>1.6.6</version>
+</dependency>
+
+<dependency>
+ <groupId>io.prometheus</groupId>
+ <artifactId>simpleclient_httpserver</artifactId>
+ <version>0.5.0</version>
+</dependency>
+```
+
+In your application, provide the Prometheus registry to the client telemetry config. Notice that you can set various diagnostic thresholds, which help limit the metrics consumed to the ones you're most interested in:
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/prometheus/async/CosmosClientMetricsQuickStartAsync.java?name=ClientMetricsConfig)]
+
+Start a local `HttpServer` to expose the meter registry's metrics to Prometheus:
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/prometheus/async/CosmosClientMetricsQuickStartAsync.java?name=PrometheusTargetServer)]
+
+Ensure you pass `clientTelemetryConfig` when creating your `CosmosClient`:
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/prometheus/async/CosmosClientMetricsQuickStartAsync.java?name=CosmosClient)]
++
+When adding the endpoint for your application client to `prometheus.yml`, add the domain name and port to `targets`. For example, if Prometheus is running on the same server as your app client, you can add `localhost:8080` to `targets` as shown below:
+
+```yml
+scrape_configs:
+ # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
+ - job_name: "prometheus"
+
+ # metrics_path defaults to '/metrics'
+ # scheme defaults to 'http'.
+
+ static_configs:
+ - targets: ["localhost:9090", "localhost:8080"]
+```
+
+Now you can consume metrics from Prometheus:
+++
+## Next steps
+
+- [Monitoring Azure Cosmos DB data reference](../monitor-reference.md)
+- [Monitoring Azure resources with Azure Monitor](../../azure-monitor//essentials//monitor-azure-resource.md)
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/distribute-throughput-across-partitions.md
description: Learn how to redistribute throughput across partitions (preview)
-+ Last updated 12/18/2023
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
Update your Azure Cosmos DB account to enable "Delete by partition key" feature
$capabilities = ($cosmosdb | ConvertFrom-Json).capabilities ``` - Step 3: Add "Delete items by partition key" capability in the list of capabilities if it doesn't exist already.
- >!Note
- The list of capabilities must always specify all capabilities that you want to enable, inclusively. This includes capabilities that are already enabled for the account that you want to keep.
+ > [!NOTE]
+ > The list of capabilities must always specify all capabilities that you want to enable, inclusively. This includes capabilities that are already enabled for the account that you want to keep.
```azurecli-interactive $capabilities += $DeleteByPk
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
Title: Quickstart - Client library for .NET
+ Title: Quickstart - .NET client library
-description: Deploy a .NET web application to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
+description: Deploy a .NET web application that uses the client library to interact with Azure Cosmos DB for NoSQL data in this quickstart.
+ ms.devlang: csharp- Previously updated : 10/24/2023-
-zone_pivot_groups: azure-cosmos-db-quickstart-path
-# CustomerIntent: As a developer, I want to learn the basics of the .NET client library so that I can build applications with Azure Cosmos DB for NoSQL.
+ Last updated : 01/08/2024
+zone_pivot_groups: azure-cosmos-db-quickstart-env
+# CustomerIntent: As a developer, I want to learn the basics of the .NET library so that I can build applications with Azure Cosmos DB for NoSQL.
-# Quickstart: Azure Cosmos DB for NoSQL client library for .NET
+# Quickstart: Azure Cosmos DB for NoSQL library for .NET
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to deploy a sample application and explore the code. In this quickstart, you use the Azure Developer CLI (`azd`) and the `Microsoft.Azure.Cosmos` library to connect to a newly created Azure Cosmos DB for NoSQL account.
+Get started with the Azure Cosmos DB for NoSQL client library for .NET to query data in your containers and perform common operations on individual items. Follow these steps to deploy a minimal solution to your environment using the Azure Developer CLI.
[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) ## Prerequisites -- An Azure account with an active subscription.
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [.NET 8.0](https://dotnet.microsoft.com/download/dotnet/8.0)
+## Setting up
+Deploy this project's development container to your environment. Then, use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and deploy a containerized sample application. The sample application uses the client library to manage, create, read, and query sample data.
-## Deploy the Azure Developer CLI template
-Use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and set up an Azure Container Apps web application. The sample application uses the client library for .NET to manage resources.
-
-1. Start in an empty directory in the Azure Cloud Shell.
-
- > [!TIP]
- > We recommend creating a new uniquely named directory within the fileshare folder (`~/clouddrive`).
- >
- > For example, this command will create a new directory and navigate to that directory:
- >
- > ```azurecli-interactive
- > mkdir ~/clouddrive/cosmos-db-nosql-dotnet-quickstart
- >
- > cd ~/clouddrive/cosmos-db-nosql-dotnet-quickstart
- > ```
-
-1. Initialize the Azure Developer CLI using `azd init` and the `cosmos-db-nosql-dotnet-quickstart` template.
-
- ```azurecli-interactive
- azd init --template cosmos-db-nosql-dotnet-quickstart
- ```
-
-1. During initialization, configure a unique environment name.
-
- > [!NOTE]
- > The environment name will also be used as the target resource group name.
-
-1. Deploy the Azure Cosmos DB account and other resources for this quickstart with `azd provision`.
-
- ```azurecli-interactive
- azd provision
- ```
-
-1. During the provisioning process, select your subscription and desired location. Wait for the provisioning process to complete. The process can take **approximately five minutes**.
-
-1. Once the provisioning of your Azure resources is done, a link to the running web application is included in the output.
-
- ```output
- View the running web application in Azure Container Apps:
- <https://container-app-39423723798.redforest-xz89v7c.eastus.azurecontainerapps.io>
-
- SUCCESS: Your application was provisioned in Azure in 5 minutes 0 seconds.
- ```
-
-1. Use the link in the console to navigate to your web application in the browser.
-
- :::image type="content" source="media/quickstart-dotnet/web-application.png" alt-text="Screenshot of the running web application.":::
+[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/azure-samples/cosmos-db-nosql-dotnet-quickstart?template=false&quickstart=1&azure-portal=true)
::: zone-end -
-## Get the application code
-
-Use the Azure Developer CLI (`azd`) to get the application code. The sample application uses the client library for .NET to manage resources.
-
-1. Start in an empty directory.
-
-1. Initialize the Azure Developer CLI using `azd init` and the `cosmos-db-nosql-dotnet-quickstart` template.
-
- ```azurecli
- azd init --template cosmos-db-nosql-dotnet-quickstart
- ```
-
-1. During initialization, configure a unique environment name.
-
- > [!NOTE]
- > If you decide to deploy this application to Azure in the future, the environment name will also be used as the target resource group name.
-
-## Create the API for NoSQL account
-
-Use the Azure CLI (`az`) to create an API for NoSQL account. You can choose to create an account in your existing subscription, or try a free Azure Cosmos DB account.
-
-### [Try Azure Cosmos DB free](#tab/try-free)
-
-1. Navigate to the **Try Azure Cosmos DB free** homepage: <https://cosmos.azure.com/try/>
-
-1. Sign-in using your Microsoft account.
-
-1. In the list of APIs, select the **Create** button for the **API for NoSQL**.
-
-1. Navigate to the newly created account by selecting **Open in portal**.
-
-1. Record the account and resource group names for the API for NoSQL account. You use these values in later steps.
-
-> [!IMPORTANT]
-> If you are using a free account, you might need to change the default subscription in Azure CLI to the subscription ID used for the free account.
->
-> ```azurecli
-> az account set --subscription <subscription-id>
-> ```
-
-### [Azure subscription](#tab/azure-subscription)
-
-1. If you haven't already, sign in to the Azure CLI using the `az login` command.
-
-1. Use `az group create` to create a new resource group in your subscription.
-
- ```azurecli
- az group create \
- --name <resource-group-name> \
- --location <location>
- ```
-
-1. Use the `az cosmosdb create` command to create a new API for NoSQL account with default settings.
-
- ```azurecli
- az cosmosdb create \
- --resource-group <resource-group-name> \
- --name <account-name> \
- --locations regionName=<location>
- ```
---
-## Create the database and container
-
-Use the Azure CLI to create the `cosmicworks` database and `products` container for the quickstart.
-1. Create a new database with `az cosmosdb sql database create`. Set the name of the database to `comsicworks` and use autoscale throughput with a maximum of **1,000** RU/s.
+[![Open in Dev Container](https://img.shields.io/static/v1?style=for-the-badge&label=Dev+Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/azure-samples/cosmos-db-nosql-dotnet-quickstart)
- ```azurecli
- az cosmosdb sql database create \
- --resource-group <resource-group-name> \
- --account-name <account-name> \
- --name "cosmicworks" \
- --max-throughput 1000
- ```
-
-1. Create a container named `products` within the `cosmicworks` database using `az cosmosdb sql container create`. Set the partition key path to `/category`.
-
- ```azurecli
- az cosmosdb sql container create \
- --resource-group <resource-group-name> \
- --account-name <account-name> \
- --database-name "cosmicworks" \
- --name "products" \
- --partition-key-path "/category"
- ```
-## Configure passwordless authentication
-When developing locally with passwordless authentication, make sure the user account that connects to Cosmos DB is assigned a role with the correct permissions to perform data operations. Currently, Azure Cosmos DB for NoSQL doesn't include built-in roles for data operations, but you can create your own using the Azure CLI or PowerShell.
+### Install the client library
-1. Get the API for NoSQL endpoint for the account using `az cosmosdb show`. You'll use this value in the next step.
+The client library is available through NuGet, as the `Microsoft.Azure.Cosmos` package.
- ```azurecli
- az cosmosdb show \
- --resource-group <resource-group-name> \
- --name <account-name> \
- --query "documentEndpoint"
- ```
-
-1. Set the `AZURE_COSMOS_DB_NOSQL_ENDPOINT` environment variable using the .NET secret manager (`dotnet user-secrets`). Set the value to the API for NoSQL account endpoint recorded in the previous step.
+1. Open a terminal and navigate to the `/src/web` folder.
```bash
- dotnet user-secrets set "AZURE_COSMOS_DB_NOSQL_ENDPOINT" "<cosmos-db-nosql-endpoint>" --project ./src/web/Cosmos.Samples.NoSQL.Quickstart.Web.csproj
- ```
-
-1. Create a JSON file named `role-definition.json`. Use this content to configure the role with the following permissions:
-
- - `Microsoft.DocumentDB/databaseAccounts/readMetadata`
- - `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*`
- - `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*`
-
- ```json
- {
- "RoleName": "Write to Azure Cosmos DB for NoSQL data plane",
- "Type": "CustomRole",
- "AssignableScopes": [
- "/"
- ],
- "Permissions": [
- {
- "DataActions": [
- "Microsoft.DocumentDB/databaseAccounts/readMetadata",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
- ]
- }
- ]
- }
- ```
-
-1. Create a role using the `az role definition create` command. Name the role `Write to Azure Cosmos DB for NoSQL data plane` and ensure the role is scoped to the account level using `/`. Use the `role-definition.json` file you created in the previous step.
-
- ```azurecli
- az cosmosdb sql role definition create \
- --resource-group <resource-group-name> \
- --account-name <account-name> \
- --body @role-definition.json
- ```
-
-1. When the command is finished, it outputs an object that includes an `id` field. Record the value from the `id` field. You use this value in an upcoming step.
-
- > [!TIP]
- > If you need to get the `id` again, you can use the `az cosmosdb sql role definition list` command:
- >
- > ```azurecli
- > az cosmosdb sql role definition list \
- > --resource-group <resource-group-name> \
- > --account-name <account-name> \
- > --query "[?roleName == 'Write to Azure Cosmos DB for NoSQL data plane'].id"
- > ```
- >
-
-1. For local development, get your currently logged in **service principal id**. Record this value as you'll also use this value in the next step.
-
- ```azurecli
- az ad signed-in-user show --query id
+ cd ./src/web
```
-1. Assign the role definition to your currently logged in user using `az cosmosdb sql role assignment create`.
+1. If not already installed, install the `Microsoft.Azure.Cosmos` package using `dotnet add package`.
- ```azurecli
- az cosmosdb sql role assignment create \
- --resource-group <resource-group-name> \
- --account-name <account-name> \
- --scope "/" \
- --role-definition-id "<your-custom-role-definition-id>" \
- --principal-id "<your-service-principal-id>"
+ ```bash
+ dotnet add package Microsoft.Azure.Cosmos
```
-1. Run the .NET web application.
+1. Also, install the `Azure.Identity` package if not already installed.
```bash
- dotnet run --project ./src/web/Cosmos.Samples.NoSQL.Quickstart.Web.csproj
+ dotnet add package Azure.Identity
```
-1. Use the link in the console to navigate to your web application in the browser.
+1. Open and review the **src/web/Cosmos.Samples.NoSQL.Quickstart.Web.csproj** file to validate that the `Microsoft.Azure.Cosmos` and `Azure.Identity` entries both exist.
- :::image type="content" source="media/quickstart-dotnet/web-application.png" alt-text="Screenshot of the running web application.":::
+## Object model
+| Name | Description |
+| | |
+| <xref:Microsoft.Azure.Cosmos.CosmosClient> | This class is the primary client class and is used to manage account-wide metadata or databases. |
+| <xref:Microsoft.Azure.Cosmos.Database> | This class represents a database within the account. |
+| <xref:Microsoft.Azure.Cosmos.Container> | This class is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
+| <xref:Microsoft.Azure.Cosmos.PartitionKey> | This class represents a logical partition key. This class is required for many common operations and queries. |
-## Walk through the .NET library code
+## Code examples
- [Authenticate the client](#authenticate-the-client) - [Get a database](#get-a-database)
When developing locally with passwordless authentication, make sure the user acc
- [Get an item](#read-an-item) - [Query items](#query-items)
-The sample code in the Azure Develop CLI template creates a database named `cosmicworks` with a container named `products`. The `products` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this sample, the container uses the `/category` property as a logical partition key.
-
-The code blocks used to perform these operations in this sample are included in this section. You can also [browse the entire template's source](https://vscode.dev/github/azure-samples/cosmos-db-nosql-dotnet-quickstart) using Visual Studio Code for the Web.
### Authenticate the client
-Application requests to most Azure services must be authorized. Using the <xref:Azure.Identity.DefaultAzureCredential> class provided by the <xref:Azure.Identity> client library and namespace is the recommended approach for implementing passwordless connections to Azure services in your code.
-
-> [!IMPORTANT]
-> You can also authorize requests to Azure services using passwords, connection strings, or other credentials directly. However, this approach should be used with caution. Developers must be diligent to never expose these secrets in an unsecure location. Anyone who gains access to the password or secret key is able to authenticate. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication.
-`DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime.
+This sample creates a new instance of the `CosmosClient` class and authenticates using a `DefaultAzureCredential` instance.
-The client authentication code for this project is in the `src/web/Program.cs` file.
-
-For example, your app can authenticate using your Visual Studio sign-in credentials when developing locally, and then use a system-assigned managed identity once it has been deployed to Azure. No code changes are required for this transition between environments.
--
-Alternatively, your app can specify a `clientId` with the <xref:Azure.Identity.DefaultAzureCredentialOptions> class to use a user-assigned managed identity locally or in Azure.
- ### Get a database
-The code to access database resources is in the `GenerateQueryDataAsync` method of the `src/web/Pages/Index.razor` file.
-
-Use the <xref:Microsoft.Azure.Cosmos.CosmosClient.GetDatabase%2A> method to return a reference to the specified database.
+Use `client.GetDatabase` to retrieve the existing database named *`cosmicworks`*.
### Get a container
-The code to access container resources is also in the `GenerateQueryDataAsync` method.
-
-The <xref:Microsoft.Azure.Cosmos.Database.GetContainer%2A> returns a reference to the specified container.
+Retrieve the existing *`products`* container using `database.GetContainer`.
### Create an item
-The easiest way to create a new item in a container is to first build a C# class or record type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a `category` field for the partition key, name, quantity, price, and clearance fields.
+Build a C# record type with all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and sale.
:::code language="csharp" source="~/cosmos-db-nosql-dotnet-quickstart/src/web/Models/Product.cs" id="model":::
-In the `GenerateQueryDataAsync` method, create an item in the container by calling <xref:Microsoft.Azure.Cosmos.Container.UpsertItemAsync%2A>.
+Create an item in the container using `container.UpsertItemAsync`. This method "upserts" the item, effectively replacing it if it already exists.
### Read an item
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (`id`) and partition key fields. In the SDK, call <xref:Microsoft.Azure.Cosmos.Container.ReadItemAsync%2A> passing in both values to return a deserialized instance of your C# type.
-Still in the `GenerateQueryDataAsync` method, use `ReadItemAsync<Product>` to serialize the item using the `Product` type.
+Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `container.ReadItemAsync` to efficiently retrieve the specific item.
### Query items
-After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: `SELECT * FROM products p WHERE p.category = "gear-surf-surfboards"`. This example uses the QueryDefinition type and a parameterized query expression for the partition key filter. Once the query is defined, call <xref:Microsoft.Azure.Cosmos.Container.GetItemQueryIterator%2A> to get a result iterator that manages the pages of results. In the example, the query logic is also in the `GenerateQueryDataAsync` method.
--
-Then, use a combination of `while` and `foreach` loops to retrieve pages of results and then iterate over the individual items.
--
-## Clean up resources
--
-When you no longer need the sample application or resources, remove the corresponding deployment and all resources.
+Perform a query over multiple items in a container using `container.GetItemQueryIterator`. Find all items within a specified category using this parameterized query:
-```azurecli-interactive
-azd down
+```nosql
+SELECT * FROM products p WHERE p.category = @category
``` --
-### [Try Azure Cosmos DB free](#tab/try-free)
-
-1. Navigate to the **Try Azure Cosmos DB free** homepage again: <https://cosmos.azure.com/try/>
-1. Sign-in using your Microsoft account.
+Parse the paginated results of the query by looping through each page of results using `feed.ReadNextAsync`. Use `feed.HasMoreResults` at the start of each loop to determine whether any results remain.
-1. Select **Delete your account**.
-### [Azure subscription](#tab/azure-subscription)
+## Clean up resources
-When you no longer need the API for NoSQL account, you can delete the corresponding resource group. Use the `az group delete` command to delete the resource group.
-```azurecli
-az group delete --name <resource-group-name>
-```
+## Related content
--
+- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Java Quickstart](quickstart-java.md)
+- [Python Quickstart](quickstart-python.md)
+- [Go Quickstart](quickstart-go.md)
## Next step

> [!div class="nextstepaction"]
-> [Tutorial: Develop a .NET console application with Azure Cosmos DB for NoSQL](tutorial-dotnet-console-app.md)
+> [Tutorial: Develop a .NET console application](tutorial-dotnet-console-app.md)
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
Title: 'Quickstart: Build a Go app using Azure Cosmos DB for NoSQL account'
-description: Gives a Go code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+ Title: Quickstart - Go client library
+
+description: Deploy a Go web application that uses the client library to interact with Azure Cosmos DB for NoSQL data in this quickstart.
++ ms.devlang: golang-- Previously updated : 3/4/2021-+ Last updated : 01/08/2024
+zone_pivot_groups: azure-cosmos-db-quickstart-env
+# CustomerIntent: As a developer, I want to learn the basics of the Go library so that I can build applications with Azure Cosmos DB for NoSQL.
-# Quickstart: Build a Go application using an Azure Cosmos DB for NoSQL account
+# Quickstart: Azure Cosmos DB for NoSQL library for Go
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
-> [!IMPORTANT]
-> The Go SDK for Azure Cosmos DB is currently in beta. This beta is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Get started with the Azure Cosmos DB for NoSQL client library for Go to query data in your containers and perform common operations on individual items. Follow these steps to deploy a minimal solution to your environment using the Azure Developer CLI.
+[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) | [Library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/data/azcosmos#readme) | [Package (Go)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
-In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage an Azure Cosmos DB for NoSQL account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)
+## Prerequisites
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md).
+## Setting up
-## Prerequisites
+Deploy this project's development container to your environment. Then, use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and deploy a containerized sample application. The sample application uses the client library to manage, create, read, and query sample data.
-- An Azure account with an active subscription.
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Go 1.16 or higher](https://golang.org/dl/)
-- [Azure CLI](/cli/azure/install-azure-cli)
+[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/azure-samples/cosmos-db-nosql-go-quickstart?template=false&quickstart=1&azure-portal=true)
-## Getting started
-For this quickstart, you'll need to create an Azure resource group and an Azure Cosmos DB account.
-Run the following commands to create an Azure resource group:
+[![Open in Dev Container](https://img.shields.io/static/v1?style=for-the-badge&label=Dev+Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/azure-samples/cosmos-db-nosql-go-quickstart)
-```azurecli
-az group create --name myResourceGroup --location eastus
-```
-Next create an Azure Cosmos DB account by running the following command:
-```
-az cosmosdb create --name my-cosmosdb-account --resource-group myResourceGroup
-```
+### Install the client library
-### Install the package
+The client library is available as the `azcosmos` package for Go.
-Use the `go get` command to install the [azcosmos](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) package.
+1. Open a terminal and navigate to the `/src` folder.
-```bash
-go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
-```
+ ```bash
+ cd ./src
+ ```
+
+1. If not already installed, add the `azcosmos` package using `go get`.
+
+ ```bash
+ go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
+ ```
+
+1. Also, add the `azidentity` package if not already installed.
+
+ ```bash
+ go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
+ ```
+
+1. Open and review the **src/go.mod** file to validate that the `github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos` and `github.com/Azure/azure-sdk-for-go/sdk/azidentity` entries both exist.
-## Key concepts
+## Object model
-* A `Client` is a connection to an Azure Cosmos DB account.
-* Azure Cosmos DB accounts can have multiple `databases`. A `DatabaseClient` allows you to create, read, and delete databases.
-* Database within an Azure Cosmos DB Account can have multiple `containers`. A `ContainerClient` allows you to create, read, update, and delete containers, and to modify throughput provision.
-* Information is stored as items inside containers. And the client allows you to create, read, update, and delete items in containers.
+| Name | Description |
+| --- | --- |
+| [`Client`](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#Client) | This type is the primary client and is used to manage account-wide metadata or databases. |
+| [`DatabaseClient`](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#DatabaseClient) | This type represents a database within the account. |
+| [`ContainerClient`](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#ContainerClient) | This type is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
+| [`PartitionKey`](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#PartitionKey) | This type represents a logical partition key. It's required for many common operations and queries. |
## Code examples
-**Authenticate the client**
-
-```go
-var endpoint = "<azure_cosmos_uri>"
-var key = "<azure_cosmos_primary_key"
-
-cred, err := azcosmos.NewKeyCredential(key)
-if err != nil {
- log.Fatal("Failed to create a credential: ", err)
-}
-
-// Create a CosmosDB client
-client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
-if err != nil {
- log.Fatal("Failed to create Azure Cosmos DB client: ", err)
-}
-
-// Create database client
-databaseClient, err := client.NewDatabase("<databaseName>")
-if err != nil {
- log.Fatal("Failed to create database client:", err)
-}
-
-// Create container client
-containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
-if err != nil {
- log.Fatal("Failed to create a container client:", err)
-}
-```
+- [Authenticate the client](#authenticate-the-client)
+- [Get a database](#get-a-database)
+- [Get a container](#get-a-container)
+- [Create an item](#create-an-item)
+- [Get an item](#read-an-item)
+- [Query items](#query-items)
-**Create an Azure Cosmos DB database**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createDatabase (client *azcosmos.Client, databaseName string) error {
-// databaseName := "adventureworks"
-
- // sets the name of the database
- databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
-
- // creating the database
- ctx := context.TODO()
- databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
- if err != nil {
- log.Fatal(err)
- }
- return nil
-}
-```
-**Create a container**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createContainer (client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "/customerId"
-
- databaseClient, err := client.NewDatabase(databaseName) // returns a struct that represents a database
- if err != nil {
- log.Fatal("Failed to create a database client:", err)
- }
-
- // Setting container properties
- containerProperties := azcosmos.ContainerProperties{
- ID: containerName,
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{partitionKey},
- },
- }
-
- // Setting container options
- throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
- options := &azcosmos.CreateContainerOptions{
- ThroughputProperties: &throughputProperties,
- }
-
- ctx := context.TODO()
- containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
- if err != nil {
- log.Fatal(err)
-
- }
- log.Printf("Container [%v] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
-
- return nil
-}
-```
+### Authenticate the client
-**Create an item**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-/*
- item = struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- }
-*/
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- b, err := json.Marshal(item)
- if err != nil {
- return err
- }
- // setting item options upon creating ie. consistency level
- itemOptions := azcosmos.ItemOptions{
- ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
- }
- ctx := context.TODO()
- itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
-
- if err != nil {
- return err
- }
- log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-```
-**Read an item**
-
-```go
-import (
- "context"
- "log"
- "fmt"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("Failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Read an item
- ctx := context.TODO()
- itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- itemResponseBody := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{}
-
- err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
- if err != nil {
- return err
- }
-
- b, err := json.MarshalIndent(itemResponseBody, "", " ")
- if err != nil {
- return err
- }
- fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
- fmt.Printf("%s\n", b)
-
- log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-```
+This sample creates a new instance of the `azcosmos.Client` type using `azcosmos.NewClient` and authenticates using a `DefaultAzureCredential` instance.
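As a minimal sketch of that pattern, the following code shows the credential and client setup. The function name and the endpoint environment variable are illustrative assumptions, not part of the sample.

```go
package quickstart

import (
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// newClient builds an azcosmos.Client using passwordless authentication.
func newClient() *azcosmos.Client {
	// Account endpoint; the variable name here is an assumption.
	endpoint := os.Getenv("AZURE_COSMOS_DB_NOSQL_ENDPOINT")

	// DefaultAzureCredential selects an authentication method at runtime
	// (for example, Azure CLI sign-in locally or managed identity in Azure).
	credential, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to create credential: %v", err)
	}

	client, err := azcosmos.NewClient(endpoint, credential, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	return client
}
```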
-**Delete an item**
-
-```go
-import (
- "context"
- "log"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("Failed to create a container client: %s", err)
- }
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Delete an item
- ctx := context.TODO()
- res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
-
- return nil
-}
-```
-## Run the code
+### Get a database
-To authenticate, you need to pass the Azure Cosmos DB account credentials to the application.
+Use `client.NewDatabase` to retrieve the existing database named *`cosmicworks`*.
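For illustration, a sketch of that call follows, continuing the hypothetical `quickstart` package from the previous sketch. `NewDatabase` only builds a database-scoped client; it doesn't call the service.

```go
package quickstart

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// getDatabase returns a handle to the existing "cosmicworks" database.
func getDatabase(client *azcosmos.Client) *azcosmos.DatabaseClient {
	database, err := client.NewDatabase("cosmicworks")
	if err != nil {
		log.Fatalf("failed to create database client: %v", err)
	}
	return database
}
```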
-Get your Azure Cosmos DB account credentials by following these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+### Get a container
-1. Navigate to your Azure Cosmos DB account.
+Retrieve the existing *`products`* container using `database.NewContainer`.
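A matching sketch for the container handle, with the helper name again being an assumption:

```go
package quickstart

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// getContainer returns a handle to the existing "products" container.
func getContainer(database *azcosmos.DatabaseClient) *azcosmos.ContainerClient {
	container, err := database.NewContainer("products")
	if err != nil {
		log.Fatalf("failed to create container client: %v", err)
	}
	return container
}
```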
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You'll add the URI and keys values to an environment variable in the next step.
-After you've copied the **URI** and **PRIMARY KEY** of your account, save them to a new environment variable on the local machine running the application.
+### Create an item
-Use the values copied from the Azure portal to set the following environment variables:
+Build a Go type with all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and sale.
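A sketch of such a type follows; the field names and JSON keys are assumptions chosen to match the description above, not the sample's actual model.

```go
package quickstart

// Product is an illustrative item shape for this quickstart.
type Product struct {
	ID        string  `json:"id"`
	Category  string  `json:"category"` // partition key path is assumed to be /category
	Name      string  `json:"name"`
	Quantity  int     `json:"quantity"`
	Price     float32 `json:"price"`
	Clearance bool    `json:"clearance"` // sale indicator
}
```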
-# [Bash](#tab/bash)
-```bash
-export AZURE_COSMOS_ENPOINT=<Your_AZURE_COSMOS_URI>
-export AZURE_COSMOS_KEY=<Your_COSMOS_PRIMARY_KEY>
-```
+Create an item in the container using `container.UpsertItem`. This method "upserts" the item, effectively replacing it if it already exists.
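A sketch of the upsert, reusing the illustrative `Product` type from the previous sketch:

```go
package quickstart

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// upsertProduct serializes an item to JSON and upserts it into the container.
func upsertProduct(container *azcosmos.ContainerClient, item Product) {
	// The partition key value must match the item's partition key property.
	partitionKey := azcosmos.NewPartitionKeyString(item.Category)

	bytes, err := json.Marshal(item)
	if err != nil {
		log.Fatalf("failed to serialize item: %v", err)
	}

	// UpsertItem creates the item, or replaces it if an item with the same
	// id and partition key already exists.
	response, err := container.UpsertItem(context.TODO(), partitionKey, bytes, nil)
	if err != nil {
		log.Fatalf("failed to upsert item: %v", err)
	}
	log.Printf("Upsert consumed %.2f request units", response.RequestCharge)
}
```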
-# [PowerShell](#tab/powershell)
-```powershell
-$env:AZURE_COSMOS_ENDPOINT=<Your_AZURE_COSMOS_URI>
-$env:AZURE_COSMOS_KEY=<Your_AZURE_COSMOS_URI>
-```
+### Read an item
-
+Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `container.ReadItem` to efficiently retrieve the specific item.
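A sketch of a point read, again using the illustrative `Product` type:

```go
package quickstart

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// readProduct performs a point read using the item's id and partition key value.
func readProduct(container *azcosmos.ContainerClient, id, category string) Product {
	partitionKey := azcosmos.NewPartitionKeyString(category)

	response, err := container.ReadItem(context.TODO(), partitionKey, id, nil)
	if err != nil {
		log.Fatalf("failed to read item: %v", err)
	}

	// Deserialize the raw JSON payload returned by the service.
	var product Product
	if err := json.Unmarshal(response.Value, &product); err != nil {
		log.Fatalf("failed to deserialize item: %v", err)
	}
	return product
}
```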
-Create a new Go module by running the following command:
-```bash
-go mod init azcosmos
-```
+### Query items
-```go
-
-package main
-
-import (
- "context"
- "encoding/json"
- "errors"
- "fmt"
- "log"
- "os"
-
- "github.com/Azure/azure-sdk-for-go/sdk/azcore"
- "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
-)
-
-func main() {
- endpoint := os.Getenv("AZURE_COSMOS_ENDPOINT")
- if endpoint == "" {
- log.Fatal("AZURE_COSMOS_ENDPOINT could not be found")
- }
-
- key := os.Getenv("AZURE_COSMOS_KEY")
- if key == "" {
- log.Fatal("AZURE_COSMOS_KEY could not be found")
- }
-
- var databaseName = "adventureworks"
- var containerName = "customer"
- var partitionKey = "/customerId"
-
- item := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- }
-
- cred, err := azcosmos.NewKeyCredential(key)
- if err != nil {
- log.Fatal("Failed to create a credential: ", err)
- }
-
- // Create a CosmosDB client
- client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
- if err != nil {
- log.Fatal("Failed to create Azure Cosmos DB db client: ", err)
- }
-
- err = createDatabase(client, databaseName)
- if err != nil {
- log.Printf("createDatabase failed: %s\n", err)
- }
-
- err = createContainer(client, databaseName, containerName, partitionKey)
- if err != nil {
- log.Printf("createContainer failed: %s\n", err)
- }
-
- err = createItem(client, databaseName, containerName, item.CustomerId, item)
- if err != nil {
- log.Printf("createItem failed: %s\n", err)
- }
-
- err = readItem(client, databaseName, containerName, item.CustomerId, item.ID)
- if err != nil {
- log.Printf("readItem failed: %s\n", err)
- }
-
- err = deleteItem(client, databaseName, containerName, item.CustomerId, item.ID)
- if err != nil {
- log.Printf("deleteItem failed: %s\n", err)
- }
-}
-
-func createDatabase(client *azcosmos.Client, databaseName string) error {
-// databaseName := "adventureworks"
-
- databaseProperties := azcosmos.DatabaseProperties{ID: databaseName}
-
- // This is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
- ctx := context.TODO()
- databaseResp, err := client.CreateDatabase(ctx, databaseProperties, nil)
-
- switch {
- case errorIs409(err):
- log.Printf("Database [%s] already exists\n", databaseName)
- case err != nil:
- return err
- default:
- log.Printf("Database [%v] created. ActivityId %s\n", databaseName, databaseResp.ActivityID)
- }
- return nil
-}
-
-func createContainer(client *azcosmos.Client, databaseName, containerName, partitionKey string) error {
-// databaseName = adventureworks
-// containerName = customer
-// partitionKey = "/customerId"
-
- databaseClient, err := client.NewDatabase(databaseName)
- if err != nil {
- return err
- }
-
- // creating a container
- containerProperties := azcosmos.ContainerProperties{
- ID: containerName,
- PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
- Paths: []string{partitionKey},
- },
- }
-
- // this is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
-
- // setting options upon container creation
- throughputProperties := azcosmos.NewManualThroughputProperties(400) //defaults to 400 if not set
- options := &azcosmos.CreateContainerOptions{
- ThroughputProperties: &throughputProperties,
- }
- ctx := context.TODO()
- containerResponse, err := databaseClient.CreateContainer(ctx, containerProperties, options)
-
- switch {
- case errorIs409(err):
- log.Printf("Container [%s] already exists\n", containerName)
- case err != nil:
- return err
- default:
- log.Printf("Container [%s] created. ActivityId %s\n", containerName, containerResponse.ActivityID)
- }
- return nil
-}
-
-func createItem(client *azcosmos.Client, databaseName, containerName, partitionKey string, item any) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-
-/* item = struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{
- ID: "1",
- CustomerId: "1",
- Title: "Mr",
- FirstName: "Luke",
- LastName: "Hayes",
- EmailAddress: "luke12@adventure-works.com",
- PhoneNumber: "879-555-0197",
- CreationDate: "2014-02-25T00:00:00",
- }
-*/
- // create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- b, err := json.Marshal(item)
- if err != nil {
- return err
- }
- // setting the item options upon creating ie. consistency level
- itemOptions := azcosmos.ItemOptions{
- ConsistencyLevel: azcosmos.ConsistencyLevelSession.ToPtr(),
- }
-
- // this is a helper function that swallows 409 errors
- errorIs409 := func(err error) bool {
- var responseErr *azcore.ResponseError
- return err != nil && errors.As(err, &responseErr) && responseErr.StatusCode == 409
- }
-
- ctx := context.TODO()
- itemResponse, err := containerClient.CreateItem(ctx, pk, b, &itemOptions)
-
- switch {
- case errorIs409(err):
- log.Printf("Item with partitionkey value %s already exists\n", pk)
- case err != nil:
- return err
- default:
- log.Printf("Status %d. Item %v created. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
- }
-
- return nil
-}
-
-func readItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client: %s", err)
- }
-
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Read an item
- ctx := context.TODO()
- itemResponse, err := containerClient.ReadItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- itemResponseBody := struct {
- ID string `json:"id"`
- CustomerId string `json:"customerId"`
- Title string
- FirstName string
- LastName string
- EmailAddress string
- PhoneNumber string
- CreationDate string
- }{}
-
- err = json.Unmarshal(itemResponse.Value, &itemResponseBody)
- if err != nil {
- return err
- }
-
- b, err := json.MarshalIndent(itemResponseBody, "", " ")
- if err != nil {
- return err
- }
- fmt.Printf("Read item with customerId %s\n", itemResponseBody.CustomerId)
- fmt.Printf("%s\n", b)
-
- log.Printf("Status %d. Item %v read. ActivityId %s. Consuming %v Request Units.\n", itemResponse.RawResponse.StatusCode, pk, itemResponse.ActivityID, itemResponse.RequestCharge)
-
- return nil
-}
-
-func deleteItem(client *azcosmos.Client, databaseName, containerName, partitionKey, itemId string) error {
-// databaseName = "adventureworks"
-// containerName = "customer"
-// partitionKey = "1"
-// itemId = "1"
-
- // Create container client
- containerClient, err := client.NewContainer(databaseName, containerName)
- if err != nil {
- return fmt.Errorf("failed to create a container client:: %s", err)
- }
- // Specifies the value of the partiton key
- pk := azcosmos.NewPartitionKeyString(partitionKey)
-
- // Delete an item
- ctx := context.TODO()
-
- res, err := containerClient.DeleteItem(ctx, pk, itemId, nil)
- if err != nil {
- return err
- }
-
- log.Printf("Status %d. Item %v deleted. ActivityId %s. Consuming %v Request Units.\n", res.RawResponse.StatusCode, pk, res.ActivityID, res.RequestCharge)
-
- return nil
-}
+Perform a query over multiple items in a container using `container.NewQueryItemsPager`. Find all items within a specified category using this parameterized query:
+```nosql
+SELECT * FROM products p WHERE p.category = @category
```
-Create a new file named `main.go` and copy the code from the sample section above.
-Run the following command to execute the app:
-```bash
-go run main.go
-```
+Parse the paginated results of the query by looping through each page of results using `pager.NextPage`. Use `pager.More` at the start of each loop to determine whether any results remain.
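A sketch of the query and paging loop, continuing the illustrative helpers and `Product` type used in the earlier sketches:

```go
package quickstart

import (
	"context"
	"encoding/json"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// queryProductsByCategory runs the parameterized query and walks every page of results.
func queryProductsByCategory(container *azcosmos.ContainerClient, category string) []Product {
	partitionKey := azcosmos.NewPartitionKeyString(category)

	query := "SELECT * FROM products p WHERE p.category = @category"
	options := &azcosmos.QueryOptions{
		QueryParameters: []azcosmos.QueryParameter{
			{Name: "@category", Value: category},
		},
	}

	pager := container.NewQueryItemsPager(query, partitionKey, options)

	var products []Product
	for pager.More() { // More reports whether another page is available.
		page, err := pager.NextPage(context.TODO())
		if err != nil {
			log.Fatalf("failed to get page of query results: %v", err)
		}
		for _, itemBytes := range page.Items {
			var product Product
			if err := json.Unmarshal(itemBytes, &product); err != nil {
				log.Fatalf("failed to deserialize item: %v", err)
			}
			products = append(products, product)
		}
	}
	return products
}
```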
+## Clean up resources
+## Related content
-## Next steps
+- [.NET Quickstart](quickstart-dotnet.md)
+- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Java Quickstart](quickstart-java.md)
+- [Python Quickstart](quickstart-python.md)
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a database, container, and an item entry. Now import more data to your Azure Cosmos DB account.
+## Next step
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+> [!div class="nextstepaction"]
+> [Go package](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos)
cosmos-db Quickstart Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md
- Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
-description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
---- Previously updated : 02/22/2023-----
-# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data
--
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
-
-In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub.
-
-First, you create an Azure Cosmos DB for NoSQL account using the Azure portal. Alternately, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You can then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Azure Cosmos DB account by using the Spring Boot application.
-
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-> [!IMPORTANT]
-> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find release notes for version 2 at [Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources](sdk-java-spring-data-v2.md).
->
-> Spring Data Azure Cosmos DB supports only the API for NoSQL.
->
-> See the following articles for information about Spring Data on other Azure Cosmos DB APIs:
->
-> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
-> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
-
-## Prerequisites
-
-* An Azure account with an active subscription.
- * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-* [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Set the `JAVA_HOME` environment variable to the JDK install folder.
-* A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
-* [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
-
-## Introductory notes
-
-*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the following diagram:
--
-For more information about databases, containers, and items, see [Azure Cosmos DB resource model](../resource-model.md). A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
-
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. You can select provisioned throughput at per-container granularity or per-database granularity. However, you should prefer container-level throughput specification. For more information, see [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md).
-
-As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*. You must choose one field in your documents to be the partition key, which maps each document to a partition.
-
-The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values. For this reason, you should choose a partition key that's relatively random or evenly distributed. Otherwise, you get *hot partitions* and *cold partitions*, which see substantially more or fewer requests. For information on avoiding this condition, see [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
-
-## Create a database account
-
-Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB.
--
-## Add a container
--
-<a id="add-sample-data"></a>
-## Add sample data
--
-## Query your data
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it.
-
-Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
-```bash
-git clone https://github.com/Azure-Samples/azure-spring-boot-samples.git
-```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
-
-### [Passwordless (Recommended)](#tab/passwordless)
-
-In this section, the configurations and the code don't have any authentication operations. However, connecting to Azure service requires authentication. To complete the authentication, you need to use Azure Identity. Spring Cloud Azure uses `DefaultAzureCredential`, which Azure Identity provides to help you get credentials without any code changes.
-
-`DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. For more information, see the [Default Azure credential](/azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential) section of [Authenticate Azure-hosted Java applications](/azure/developer/java/sdk/identity-azure-hosted-auth).
--
-### Authenticate using DefaultAzureCredential
--
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed in with in the previous step.
-
-### Application configuration file
-
-Configure the Azure Database for MySQL credentials in the *application.yml* configuration file in the *cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample* directory. Replace the values of `${AZURE_COSMOS_ENDPOINT}` and `${COSMOS_DATABASE}`.
-
-```yaml
-spring:
- cloud:
- azure:
- cosmos:
- endpoint: ${AZURE_COSMOS_ENDPOINT}
- database: ${COSMOS_DATABASE}
-```
-
-After Spring Boot and Spring Data create the Azure Cosmos DB account, database, and container, they connect to the database and container for `delete`, `add`, and `find` operations.
-
-### [Password](#tab/password)
-
-### Application configuration file
-
-The following section shows how Spring Boot and Spring Data use configuration instead of code to establish an Azure Cosmos DB client and connect to Azure Cosmos DB resources. At application startup Spring Boot handles all of this boilerplate using the following settings in *application.yml*:
-
-```yaml
-spring:
- cloud:
- azure:
- cosmos:
- key: ${AZURE_COSMOS_KEY}
- endpoint: ${AZURE_COSMOS_ENDPOINT}
- database: ${COSMOS_DATABASE}
-```
-
-Once you create an Azure Cosmos DB account, database, and container, just fill-in-the-blanks in the config file and Spring Boot/Spring Data does the following: (1) creates an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connects to the database and container. You're all set - no more resource management code!
---
-### Java source
-
-Spring Data provides a simple, clean, standardized, and platform-independent interface for operating on datastores, as shown in the following examples. These CRUD and query examples enable you to manipulate Azure Cosmos DB documents by using Spring Data Azure Cosmos DB. These examples build on the Spring Data GitHub sample linked to earlier in this article.
-
-* Item creation and updates by using the `save` method.
-
- ```java
- // Save the User class to Azure Cosmos DB database.
- final Mono<User> saveUserMono = repository.save(testUser);
- ```
-
-* Point-reads using the derived query method defined in the repository. The `findById` performs point-reads for `repository`. The fields mentioned in the method name cause Spring Data to execute a point-read defined by the `id` field:
-
- ```java
- // Nothing happens until we subscribe to these Monos.
- // findById will not return the user as user is not present.
- final Mono<User> findByIdMono = repository.findById(testUser.getId());
- final User findByIdUser = findByIdMono.block();
- Assert.isNull(findByIdUser, "User must be null");
- ```
-
-* Item deletes using `deleteAll`:
-
- ```java
- repository.deleteAll().block();
- LOGGER.info("Deleted all data in container.");
- ```
-
-* Derived query based on repository method name. Spring Data implements the `repository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field. You can't implement this query as a point-read.
-
- ```java
- final Flux<User> firstNameUserFlux = repository.findByFirstName("testFirstName");
- ```
-
-## Run the app
-
-Now go back to the Azure portal to get your connection string information. Then, use the following steps to launch the app with your endpoint information so your app can communicate with your hosted database.
-
-1. In the Git terminal window, `cd` to the sample code folder.
-
- ```bash
- cd azure-spring-boot-samples/cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample
- ```
-
-1. In the Git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages.
-
- ```bash
- mvn clean package
- ```
-
-1. In the Git terminal window, use the following command to start the Spring Data Azure Cosmos DB application:
-
- ```bash
- mvn spring-boot:run
- ```
-
-1. The app loads *application.yml* and connects the resources in your Azure Cosmos DB account.
-1. The app performs point CRUD operations described previously.
-1. The app performs a derived query.
-1. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account and create a document database and container using the Data Explorer. You then ran a Spring Data app to do the same thing programmatically. You can now import more data into your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
Title: "Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data"
-description: Use a Java code sample from GitHub to learn how to build an app to connect to and query Azure Cosmos DB for NoSQL.
+ Title: Quickstart - Java client library
+
+description: Deploy a Java Spring Web application that uses the client library to interact with Azure Cosmos DB for NoSQL data in this quickstart.
++ ms.devlang: java- Previously updated : 03/16/2023---+ Last updated : 01/08/2024
+zone_pivot_groups: azure-cosmos-db-quickstart-env
+# CustomerIntent: As a developer, I want to learn the basics of the Java library so that I can build applications with Azure Cosmos DB for NoSQL.
-# Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
+# Quickstart: Azure Cosmos DB for NoSQL library for Java
-This quickstart guide explains how to build a Java app to manage an Azure Cosmos DB for NoSQL account. You create the Java app using the SQL Java SDK, and add resources to your Azure Cosmos DB account by using the Java application.
-First, create an Azure Cosmos DB for NoSQL account using the Azure portal. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. You can [try Azure Cosmos DB account](https://aka.ms/trycosmosdb) for free without a credit card or an Azure subscription.
-> [!IMPORTANT]
-> This quickstart is for Azure Cosmos DB Java SDK v4 only. For more information, see the [release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), [performance tips](performance-tips-java-sdk-v4.md), and [troubleshooting guide](troubleshoot-java-sdk-v4.md). If you currently use an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+Get started with the Azure Cosmos DB for NoSQL client library for Java to query data in your containers and perform common operations on individual items. Follow these steps to deploy a minimal solution to your environment using the Azure Developer CLI.
-> [!TIP]
-> If you work with Azure Cosmos DB resources in a Spring application, consider using [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Cosmos DB, see [Access data with Azure Cosmos DB NoSQL API](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db).
+[API reference documentation](/java/api/overview/azure/cosmos-readme) | [Library source code](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos) | [Package (Maven)](https://central.sonatype.com/artifact/com.azure/azure-cosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
## Prerequisites

-- An Azure account with an active subscription. If you don't have an Azure subscription, you can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Java Development Kit (JDK) 8](https://www.oracle.com/java/technologies/javase/8u-relnotes.html). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
-- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
-- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
-
-## Introductory notes
-
-**The structure of an Azure Cosmos DB account:** For any API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the following diagram:
--
-For more information, see [Databases, containers, and items in Azure Cosmos DB](../resource-model.md).
-
-A few important properties are defined at the level of the container, including *provisioned throughput* and *partition key*. The provisioned throughput is measured in request units (RUs), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. To learn more about throughput provisioning, see [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md).
-
-As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key that maps each document to a partition. Partitions are managed such that each partition is assigned a roughly equal slice out of the range of partition key values. Therefore, you're advised to choose a partition key that's relatively random or evenly distributed. Otherwise, some partitions see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*). To learn more, see [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
-
-## Create a database account
-
-Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB.
--
-## Add a container
--
-<a id="add-sample-data"></a>
-
-## Add sample data
--
-## Query your data
--
-## Clone the sample application
-
-Now let's switch to working with code. Clone an API for NoSQL app from GitHub, set the connection string, and run it. You can see how easy it is to work with data programmatically.
-
-Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
-```bash
-git clone https://github.com/Azure-Samples/azure-cosmos-java-getting-started.git
-```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
-
-## [Passwordless Sync API (Recommended)](#tab/passwordlesssync)
---
-## Authenticate using DefaultAzureCredential
--
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed into in the previous step.
-
-### Manage database resources using the synchronous (sync) API
-
-* `CosmosClient` initialization: The `CosmosClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreatePasswordlessSyncClient)]
-
-* Use the [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
-
- ```azurecli-interactive
- # Create a SQL API database
- az cosmosdb sql database create \
- --account-name msdocs-cosmos-nosql \
- --resource-group msdocs \
- --name AzureSampleFamilyDB
- ```
-
- ```azurecli-interactive
- # Create a SQL API container
- az cosmosdb sql container create \
- --account-name msdocs-cosmos-nosql \
- --resource-group msdocs \
- --database-name AzureSampleFamilyDB \
- --name FamilyContainer \
- --partition-key-path '/lastName'
- ```
-* Item creation by using the `createItem` method.
+## Setting up
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreateItem)]
+Deploy this project's development container to your environment. Then, use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and deploy a containerized sample application. The sample application uses the client library to manage, create, read, and query sample data.
-* Point reads are performed using `readItem` method.
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=ReadItem)]
+[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/azure-samples/cosmos-db-nosql-java-quickstart?template=false&quickstart=1&azure-portal=true)
-* SQL queries over JSON are performed using the `queryItems` method.
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=QueryItems)]
-## Run the app
+[![Open in Dev Container](https://img.shields.io/static/v1?style=for-the-badge&label=Dev+Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/azure-samples/cosmos-db-nosql-java-quickstart)
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-1. In the git terminal window, `cd` to the sample code folder.
- ```bash
- cd azure-cosmos-java-getting-started
- ```
+### Install the client library
-2. In the git terminal window, use the following command to install the required Java packages.
+The client library is available through Maven as the `azure-spring-data-cosmos` package.
- ```bash
- mvn package
- ```
+1. Navigate to the `/src/web` folder and open the **pom.xml** file.
-3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless`, depending on which sample code you'd like to run. Replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal.
+1. If it doesn't already exist, add an entry for the `azure-spring-data-cosmos` package.
- ```bash
- mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-spring-data-cosmos</artifactId>
+ </dependency>
```
- The terminal window displays a notification that the `FamilyDB` database was created.
-
-4. The app references the database and container you created via Azure CLI earlier.
-
-5. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
-
-6. The app queries items to retrieve all families with last name (*Andersen*, *Wakefield*, *Johnson*).
-
-7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
-
-## [Passwordless Async API](#tab/passwordlessasync)
--
+1. Also, add another dependency for the `azure-identity` package if it doesn't already exist.
-## Authenticate using DefaultAzureCredential
--
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the [azure-identity dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed-in with in the previous step.
-
-### Managing database resources using the asynchronous (async) API
-
-* Async API calls return immediately, without waiting for a response from the server. The following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
-
-* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreatePasswordlessAsyncClient)]
-
-* Use the [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
-
- ```azurecli-interactive
- # Create a SQL API database
- az cosmosdb sql database create \
- --account-name msdocs-cosmos-nosql \
- --resource-group msdocs \
- --name AzureSampleFamilyDB
- ```
-
- ```azurecli-interactive
- # Create a SQL API container
- az cosmosdb sql container create \
- --account-name msdocs-cosmos-nosql \
- --resource-group msdocs \
- --database-name AzureSampleFamilyDB \
- --name FamilyContainer \
- --partition-key-path '/lastName'
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ </dependency>
```
-* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream that issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program doesn't terminate during item creation. **The proper asynchronous programming practice is not to block on async calls. In realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
+## Object model
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreateItem)]
-
-* As with the sync API, point reads are performed using `readItem` method.
+| Name | Description |
+| --- | --- |
+| `EnableCosmosRepositories` | This annotation is applied to a configuration class to enable repository support for Azure Cosmos DB for NoSQL. |
+| `CosmosRepository` | This interface is the primary repository abstraction and is used to manage data within a container. |
+| `CosmosClientBuilder` | This class is a factory used to create a client used by the repository. |
+| `Query` | This annotation is applied to a repository method to specify the query that the method executes. |
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=ReadItem)]
+## Code examples
-* As with the sync API, SQL queries over JSON are performed using the `queryItems` method.
+- [Authenticate the client](#authenticate-the-client)
+- [Get a database](#get-a-database)
+- [Get a container](#get-a-container)
+- [Create an item](#create-an-item)
+- [Read an item](#read-an-item)
+- [Query items](#query-items)
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=QueryItems)]
-## Run the app
+### Authenticate the client
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-1. In the git terminal window, `cd` to the sample code folder.
+First, this sample creates a new class that inherits from `AbstractCosmosConfiguration` to configure the connection to Azure Cosmos DB for NoSQL.
- ```bash
- cd azure-cosmos-java-getting-started
- ```
-2. In the git terminal window, use the following command to install the required Java packages.
+Within the configuration class, this sample creates a new instance of the `CosmosClientBuilder` class and configures authentication using a `DefaultAzureCredential` instance.
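A rough sketch of such a configuration class follows (not the sample's literal code; the class name and the `azure.cosmos.uri` property key are illustrative assumptions):

```java
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.spring.data.cosmos.config.AbstractCosmosConfiguration;
import com.azure.spring.data.cosmos.repository.config.EnableCosmosRepositories;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCosmosRepositories
public class CosmosConfiguration extends AbstractCosmosConfiguration {

    // Account endpoint, injected from configuration (assumed property name)
    @Value("${azure.cosmos.uri}")
    private String uri;

    @Bean
    public CosmosClientBuilder cosmosClientBuilder() {
        // DefaultAzureCredential discovers the identity you signed in with (Azure CLI, managed identity, and so on)
        return new CosmosClientBuilder()
            .endpoint(uri)
            .credential(new DefaultAzureCredentialBuilder().build());
    }

    @Override
    protected String getDatabaseName() {
        // The existing database used by this quickstart (see the next section)
        return "cosmicworks";
    }
}
```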
- ```bash
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal.
-
- ```bash
- mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
- ```
-
- The terminal window displays a notification that the `AzureSampleFamilyDB` database was created.
-
-4. The app references the database and container you created via Azure CLI earlier.
-
-5. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
-
-6. The app queries items to retrieve all families with last name (*Andersen*, *Wakefield*, *Johnson*).
-
-7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
-
-## [Sync API](#tab/sync)
-### Managing database resources using the synchronous (sync) API
+### Get a database
-* `CosmosClient` initialization. The `CosmosClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateSyncClient)]
+In the configuration class, the sample implements a method to return the name of the existing database named *`cosmicworks`*.
-* `CosmosDatabase` creation.
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
+### Get a container
-* `CosmosContainer` creation.
+Use the `Container` annotation to configure a class to represent items in a container. Author the class to include all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and clearance.
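A hypothetical entity class along these lines (the class name, field names, and `/category` partition key path are assumptions for illustration):

```java
import com.azure.spring.data.cosmos.core.mapping.Container;
import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
import org.springframework.data.annotation.Id;

// Maps instances of this class to JSON items in the "products" container
@Container(containerName = "products")
public class Product {

    @Id
    private String id;

    // Assumes the container's partition key path is /category
    @PartitionKey
    private String category;

    private String name;
    private Integer quantity;
    private Double price;
    private Boolean clearance;

    public Product() {
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getCategory() { return category; }
    public void setCategory(String category) { this.category = category; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // Remaining getters and setters omitted for brevity
}
```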
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
-* Item creation by using the `createItem` method.
+### Create an item
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateItem)]
-
-* Point reads are performed using `readItem` method.
+Create an item in the container using `repository.save`.
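For example, a repository interface built on the hypothetical `Product` class sketched earlier might look like this (the `ProductRepository` name is an assumption):

```java
import com.azure.spring.data.cosmos.repository.CosmosRepository;

// Spring Data generates the implementation of this interface at runtime
public interface ProductRepository extends CosmosRepository<Product, String> {
}
```

Saving an item through an autowired instance of that repository could then be as simple as:

```java
// 'repository' is an autowired ProductRepository instance
Product item = new Product();
item.setId("example-product-id");
item.setCategory("gear-surf-surfboards");
item.setName("Yamba Surfboard");

// save writes the item to the container
repository.save(item);
```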
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=ReadItem)]
-* SQL queries over JSON are performed using the `queryItems` method.
+### Read an item
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=QueryItems)]
+Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `repository.findById` to efficiently retrieve the specific item.
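A point read with the same hypothetical `ProductRepository` might look like this sketch (the id and partition key values are illustrative):

```java
// PartitionKey is com.azure.cosmos.models.PartitionKey; Optional is java.util.Optional
Optional<Product> existingItem = repository.findById(
    "example-product-id",
    new PartitionKey("gear-surf-surfboards"));

existingItem.ifPresent(item -> System.out.println("Found: " + item.getName()));
```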
-## Run the app
-
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-
-1. In the git terminal window, `cd` to the sample code folder.
-
- ```bash
- cd azure-cosmos-java-getting-started
- ```
-
-2. In the git terminal window, use the following command to install the required Java packages.
-
- ```bash
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal.
-
- ```bash
- mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
- ```
- The terminal window displays a notification that the FamilyDB database was created.
+### Query items
-4. The app creates a database with the name `AzureSampleFamilyDB`.
-
-5. The app creates a container with the name `FamilyContainer`.
-
-6. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
-
-7. The app queries items to retrieve all families with last name (*Andersen*, *Wakefield*, *Johnson*).
-
-8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges.
-
-## [Async API](#tab/async)
-
-### Managing database resources using the asynchronous (async) API
-
-* Async API calls return immediately, without waiting for a response from the server. The following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
-
-* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateAsyncClient)]
-
-* `CosmosAsyncDatabase` creation.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
-
-* `CosmosAsyncContainer` creation.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
-
-* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream that issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program doesn't terminate during item creation. **The proper asynchronous programming practice is not to block on async calls. In realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateItem)]
-
-* As with the sync API, point reads are performed by using `readItem` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=ReadItem)]
-
-* As with the sync API, SQL queries over JSON are performed by using the `queryItems` method.
-
- [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)]
-
-## Run the app
-
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
-
-1. In the git terminal window, `cd` to the sample code folder.
-
- ```bash
- cd azure-cosmos-java-getting-started
- ```
-
-2. In the git terminal window, use the following command to install the required Java packages.
-
- ```bash
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal.
-
- ```bash
- mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
-
- ```
-
- The terminal window displays a notification that the `FamilyDB` database was created.
-
-4. The app creates a database with the name `AzureSampleFamilyDB`.
-
-5. The app creates a container with the name `FamilyContainer`.
-
-6. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
-
-7. The app queries items to retrieve all families with last name (*Andersen*, *Wakefield*, *Johnson*).
-
-8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges.
----
-## Use Throughput Control
-
-Having throughput control helps to isolate the performance needs of applications running against a container, by limiting the amount of [request units](../request-units.md) that can be consumed by a given Java SDK client.
-
-There are several advanced scenarios that benefit from client-side throughput control:
-
-- **Different operations and tasks have different priorities** - there can be a need to prevent normal transactions from being throttled due to data ingestion or copy activities. Some operations and/or tasks aren't sensitive to latency, and are more tolerant to being throttled than others.
-
-- **Provide fairness/isolation to different end users/tenants** - An application will usually have many end users. Some users may send too many requests, which consume all available throughput, causing others to get throttled.
-
-- **Load balancing of throughput between different Azure Cosmos DB clients** - in some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput.
-
-> [!WARNING]
-> Please note that throughput control is not yet supported for gateway mode.
-> Currently, for [serverless Azure Cosmos DB accounts](../serverless.md), attempting to use `targetThroughputThreshold` to define a percentage will result in failure. You can only provide an absolute value for target throughput/RU using `targetThroughput`.
-
-### Global throughput control
-
-Global throughput control in the Java SDK is configured by first creating a container that will define throughput control metadata. This container must have a partition key of `groupId`, and `ttl` enabled. Assuming you already have objects for client, database, and container as defined in the examples above, you can create this container as below. Here we name the container `ThroughputControl`:
-
-## [Sync API](#tab/sync-throughput)
-
-```java
- CosmosContainerProperties throughputContainerProperties = new CosmosContainerProperties("ThroughputControl", "/groupId").setDefaultTimeToLiveInSeconds(-1);
- ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
- database.createContainerIfNotExists(throughputContainerProperties, throughputProperties);
-```
-
-## [Async API](#tab/async-throughput)
-
-```java
- CosmosContainerProperties throughputContainerProperties = new CosmosContainerProperties("ThroughputControl", "/groupId").setDefaultTimeToLiveInSeconds(-1);
- ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);
- database.createContainerIfNotExists(throughputContainerProperties, throughputProperties).block();
-```
--
-> [!NOTE]
-> The throughput control container must be created with a partition key `/groupId` and must have `ttl` value set, or throughput control will not function correctly.
-
-Then, to enable the container object used by the current client to use a shared global control group, we need to create two sets of config. The first is to define the control `groupName`, and the `targetThroughputThreshold` or `targetThroughput` for that group. If the group does not already exist, an entry for it will be created in the throughput control container:
-
-```java
- ThroughputControlGroupConfig groupConfig =
- new ThroughputControlGroupConfigBuilder()
- .groupName("globalControlGroup")
- .targetThroughputThreshold(0.25)
- .targetThroughput(100)
- .build();
-```
-
-> [!NOTE]
-> In the above, we define a `targetThroughput` value of `100`, meaning that only a maximum of 100 RUs of the container's provisioned throughput can be used by all clients consuming the throughput control group, before the SDK will attempt to rate limit clients. You can also define `targetThroughputThreshold` to provide a percentage of the container's throughput as the threshold instead (the example above defines a threshold of 25%). Defining a value for both will not cause an error, but the SDK will apply the one with the lower value. For example, if the container in the above example has 1000 RUs provisioned, the value of `targetThroughputThreshold(0.25)` will be 250 RUs, so the lower value of `targetThroughput(100)` will be used as the threshold.
-
-> [!IMPORTANT]
-> If you reference a `groupName` that already exists, but define `targetThroughputThreshold` or `targetThroughput` values to be different than what was originally defined for the group, this will be treated as a different group (even though it has the same name). To make sure all clients use the same group, make sure they all have the same settings for both `groupName` **and** `targetThroughputThreshold` (or `targetThroughput`). You also need to restart all applications after making any such changes, to ensure they all consume the new threshold or target throughput properly.
-
-The second config you need to create will reference the throughput container you created earlier, and define some behaviours for it using two parameters:
-- Use `setControlItemRenewInterval` to determine how fast throughput will be re-balanced between clients. At each renewal interval, each client will update its own throughput usage in a client item record stored in the throughput control container. It will also read the throughput usage of all other active clients, and adjust the throughput that should be assigned to itself. The minimum value that can be set is 5 seconds (there is no maximum value).
-
-- Use `setControlItemExpireInterval` to determine when a dormant client should be considered offline and no longer part of any throughput control group. Upon expiry, the client item in the throughput container will be removed, and the data will no longer be used for re-balancing between clients. The value must be at least (2 * `setControlItemRenewInterval` + 1) seconds. For example, if the value of `setControlItemRenewInterval` is 5 seconds, the value of `setControlItemExpireInterval` must be at least 11 seconds.
-
-```java
- GlobalThroughputControlConfig globalControlConfig =
- this.client.createGlobalThroughputControlConfigBuilder("ThroughputControlDatabase", "ThroughputControl")
- .setControlItemRenewInterval(Duration.ofSeconds(5))
- .setControlItemExpireInterval(Duration.ofSeconds(11))
- .build();
-```
-
-Now we're ready to enable global throughput control for this container object. Other Cosmos clients running in other JVMs can share the same throughput control group, as long as they reference the same throughput control metadata container and the same throughput control group name.
-
-```java
- container.enableGlobalThroughputControlGroup(groupConfig, globalControlConfig);
-```
-
-Finally, you must set the group name in request options for the given operation:
-
-```java
- CosmosItemRequestOptions options = new CosmosItemRequestOptions();
- options.setThroughputControlGroupName("globalControlGroup");
- container.createItem(family, options).block();
-```
-
-For [bulk operations](bulk-executor-java.md), this would look like the below:
-
-```java
- Flux<Family> families = Flux.range(0, 1000).map(i -> {
- Family family = new Family();
- family.setId(UUID.randomUUID().toString());
- family.setLastName("Andersen-" + i);
- return family;
- });
- CosmosBulkExecutionOptions bulkExecutionOptions = new CosmosBulkExecutionOptions();
- bulkExecutionOptions.setThroughputControlGroupName("globalControlGroup");
- Flux<CosmosItemOperation> cosmosItemOperations = families.map(family -> CosmosBulkOperations.getCreateItemOperation(family, new PartitionKey(family.getLastName())));
- container.executeBulkOperations(cosmosItemOperations, bulkExecutionOptions).blockLast();
-```
-
-> [!NOTE]
-> Throughput control does not do RU pre-calculation of each operation. Instead, it tracks the RU usages *after* the operation based on the response header. As such, throughput control is based on an approximation - and **does not guarantee** that amount of throughput will be available for the group at any given time. This means that if the configured RU is so low that a single operation can use it all, then throughput control cannot avoid the RU exceeding the configured limit. Therefore, throughput control works best when the configured limit is higher than any single operation that can be executed by a client in the given control group. With that in mind, when reading via query or change feed, you should configure the [page size](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/a9460846d144fb87ae4e3d2168f63a9f2201c5ed/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L255) to be a modest amount, so that client throughput control can be re-calculated with higher frequency, and therefore reflected more accurately at any given time. However, when using throughput control for a write-job using bulk, the number of documents executed in a single request will automatically be tuned based on the throttling rate to allow the throughput control to kick-in as early as possible.
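For example, a query sketch along these lines caps the page size so that RU usage is fed back into throughput control more often (this reuses the async `container`, `Family` class, and `globalControlGroup` group from the earlier snippets; the query text and page size of 100 are illustrative assumptions):

```java
    CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
    queryOptions.setThroughputControlGroupName("globalControlGroup");

    // Request modest pages so RU usage is reported back to throughput control more frequently
    container.queryItems("SELECT * FROM c", queryOptions, Family.class)
        .byPage(100)
        .flatMap(page -> {
            System.out.println("Read a page of " + page.getResults().size() + " items");
            return Flux.empty();
        })
        .blockLast();
```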
-
-### Local throughput control
-
-You can also use local throughput control, without defining a shared control group that multiple clients will use. However, with this approach, each client will be unaware of how much throughput other clients are consuming from the total available throughput in the container, while global throughput control attempts to load balance the consumption of each client.
-
-```java
- ThroughputControlGroupConfig groupConfig =
- new ThroughputControlGroupConfigBuilder()
- .groupName("localControlGroup")
- .targetThroughputThreshold(0.1)
- .build();
- container.enableLocalThroughputControlGroup(groupConfig);
-```
-
-As with global throughput control, remember to set the group name in request options for the given operation:
-
-```java
- CosmosItemRequestOptions options = new CosmosItemRequestOptions();
- options.setThroughputControlGroupName("localControlGroup");
- container.createItem(family, options).block();
-```
+Perform a query over multiple items in a container by defining a query in the repository's interface. This sample uses the `Query` annotation to define a method that executes this parameterized query:
-For [bulk operations](bulk-executor-java.md), this would look like the below:
-
-```java
- Flux<Family> families = Flux.range(0, 1000).map(i -> {
- Family family = new Family();
- family.setId(UUID.randomUUID().toString());
- family.setLastName("Andersen-" + i);
- return family;
- });
- CosmosBulkExecutionOptions bulkExecutionOptions = new CosmosBulkExecutionOptions();
- bulkExecutionOptions.setThroughputControlGroupName("localControlGroup");
- Flux<CosmosItemOperation> cosmosItemOperations = families.map(family -> CosmosBulkOperations.getCreateItemOperation(family, new PartitionKey(family.getLastName())));
- container.executeBulkOperations(cosmosItemOperations, bulkExecutionOptions).blockLast();
+```nosql
+SELECT * FROM products p WHERE p.category = @category
```
-## Review SLAs in the Azure portal
+Fetch all of the results of the query using `repository.getItemsByCategory`. Loop through the results of the query.
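A sketch of that repository method and its use follows; the method name matches the `getItemsByCategory` reference above, while the interface name, `Product` type, and category value are the illustrative assumptions from the earlier sketches:

```java
import com.azure.spring.data.cosmos.repository.CosmosRepository;
import com.azure.spring.data.cosmos.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.List;

public interface ProductRepository extends CosmosRepository<Product, String> {

    // Parameterized query executed by the service; @category is bound from the method argument
    @Query("SELECT * FROM products p WHERE p.category = @category")
    List<Product> getItemsByCategory(@Param("category") String category);
}
```

Calling the method and iterating over the results might then look like this:

```java
// 'repository' is an autowired ProductRepository instance
List<Product> results = repository.getItemsByCategory("gear-surf-surfboards");
for (Product product : results) {
    System.out.println(product.getName());
}
```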
-## Clean up resources
+## Related content
-## Next steps
+- [.NET Quickstart](quickstart-dotnet.md)
+- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Java Quickstart](quickstart-java.md)
+- [Go Quickstart](quickstart-go.md)
-In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+## Next step
-Are you capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating RUs using vCores or vCPUs](../convert-vcore-to-request-unit.md).
-* If you know typical request rates for your current database workload, learn how to [estimate RUs using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+> [!div class="nextstepaction"]
+> [Tutorial: Build a Java web app](tutorial-java-web-app.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
Title: Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
-description: Learn how to build a Node.js app to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
+ Title: Quickstart - Node.js client library
+
+description: Deploy a Node.js Express web application that uses the client library to interact with Azure Cosmos DB for NoSQL data in this quickstart.
+ ms.devlang: javascript- Previously updated : 05/08/2023-+ Last updated : 01/08/2024
+zone_pivot_groups: azure-cosmos-db-quickstart-env
+# CustomerIntent: As a developer, I want to learn the basics of the Node.js library so that I can build applications with Azure Cosmos DB for NoSQL.
-# Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
+# Quickstart: Azure Cosmos DB for NoSQL library for Node.js
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB for NoSQL client library for Node.js to query data in your containers and perform common operations on individual items. Follow these steps to deploy a minimal solution to your environment using the Azure Developer CLI.
-> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-sql-api-javascript-samples) are available on GitHub as a Node.js project.
+[API reference documentation](/javascript/api/overview/azure/cosmos-readme) | [Library source code](https://github.com/azure/azure-sdk-for-js/tree/main/sdk/cosmosdb/cosmos) | [Package (npm)](https://www.npmjs.com/package/@azure/cosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
## Prerequisites
-
-- An Azure account with an active subscription.
-  - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Node.js LTS](https://nodejs.org/en/download/)
-- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-### Prerequisite check
-
-- In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
-- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.

## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for JavaScript to manage resources.
-
-### <a id="create-account"></a>Create an Azure Cosmos DB account
-
-> [!TIP]
-> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new JavaScript project](#create-a-new-javascript-project) section.
-
+Deploy this project's development container to your environment. Then, use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and deploy a containerized sample application. The sample application uses the client library to manage, create, read, and query sample data.
-### Create a new JavaScript project
+[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/azure-samples/cosmos-db-nosql-nodejs-quickstart?template=false&quickstart=1&azure-portal=true)
-1. Create a new Node.js application in an empty folder using your preferred terminal.
- ```bash
- npm init -y
- ```
-
-2. Edit the `package.json` file to use ES6 modules by adding the `"type": "module",` entry. This setting allows your code to use modern async/await syntax.
- :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/package.json" highlight="6":::
+[![Open in Dev Container](https://img.shields.io/static/v1?style=for-the-badge&label=Dev+Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/azure-samples/cosmos-db-nosql-nodejs-quickstart)
-### Install packages
-### [Passwordless (Recommended)](#tab/passwordless)
-1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) and [@azure/identity](https://www.npmjs.com/package/@azure/identity) npm packages to the Node.js project.
+### Install the client library
- ```bash
- npm install @azure/cosmos
- npm install @azure/identity
- ```
+The client library is available through the Node Package Manager, as the `@azure/cosmos` package.
-1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file.
+1. Open a terminal and navigate to the `/src` folder.
```bash
- npm install dotenv
+ cd ./src
```
-### [Connection String](#tab/connection-string)
-
-1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project.
+1. If not already installed, install the `@azure/cosmos` package using `npm install`.
```bash
- npm install @azure/cosmos
+ npm install --save @azure/cosmos
```
-1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file.
+1. Also, install the `@azure/identity` package if not already installed.
```bash
- npm install dotenv
+ npm install --save @azure/identity
``` ---
-### Configure environment variables
--
+1. Open and review the **src/package.json** file to validate that the `@azure/cosmos` and `@azure/identity` entries both exist.
## Object model
-
-You'll use the following JavaScript classes to interact with these resources:
-
-- [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-- [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-- [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-- [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
-- [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
-- [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
+| Name | Description |
+| --- | --- |
+| [`CosmosClient`](/javascript/api/@azure/cosmos/cosmosclient) | This class is the primary client class and is used to manage account-wide metadata or databases. |
+| [`Database`](/javascript/api/@azure/cosmos/database) | This class represents a database within the account. |
+| [`Container`](/javascript/api/@azure/cosmos/container) | This class is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
+| [`PartitionKey`](/javascript/api/@azure/cosmos/partitionkey) | This class represents a logical partition key. This class is required for many common operations and queries. |
+| [`SqlQuerySpec`](/javascript/api/@azure/cosmos/sqlqueryspec) | This interface represents a SQL query and any query parameters. |
## Code examples

- [Authenticate the client](#authenticate-the-client)
-- [Create a database](#create-a-database)
-- [Create a container](#create-a-container)
+- [Get a database](#get-a-database)
+- [Get a container](#get-a-container)
- [Create an item](#create-an-item)
-- [Get an item](#get-an-item)
+- [Read an item](#read-an-item)
- [Query items](#query-items)
-The sample code described in this article creates a database named ``cosmicworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this sample code, the container will use the category as a logical partition key.
### Authenticate the client -
-## [Passwordless (Recommended)](#tab/passwordless)
+This sample creates a new instance of the `CosmosClient` type and authenticates using a `DefaultAzureCredential` instance.
-#### Authenticate using DefaultAzureCredential
+### Get a database
+Use `client.database` to retrieve the existing database named *`cosmicworks`*.
-From the project directory, open the *index.js* file. In your editor, add npm packages to work with Cosmos DB and authenticate to Azure. You'll authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` from the [`@azure/identity`](https://www.npmjs.com/package/@azure/identity) package. `DefaultAzureCredential` will automatically discover and use the account you signed-in with previously.
--
-Create an environment variable that specifies your Cosmos DB endpoint.
--
-Create constants for the database and container names.
---
-Create a new client instance of the [`CosmosClient`](/javascript/api/@azure/cosmos/cosmosclient) class constructor with the `DefaultAzureCredential` object and the endpoint.
--
-## [Connection String](#tab/connection-string)
-
-From the project directory, open the *index.js* file. In your editor, import [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) package to work with Cosmos DB and authenticate to Azure using the endpoint and key.
--
-Create environment variables that specify your Cosmos DB endpoint and key.
--
-Create constants for the database and container names.
--
-Create a new client instance of the [`CosmosClient`](/javascript/api/@azure/cosmos/cosmosclient) class constructor with the endpoint and key.
---
-### <a id="create-and-query-the-database"></a>
-### Create a database
+### Get a container
-## [Passwordless (Recommended)](#tab/passwordless)
+Retrieve the existing *`products`* container using `database.container`.
-
-The `@azure/cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations, such as creating and deleting databases, you must use RBAC through one of the following options:
-
-> - [Azure CLI scripts](manage-with-cli.md)
-> - [Azure PowerShell scripts](manage-with-powershell.md)
-> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
-> - [Azure Resource Manager JavaScript client library](https://www.npmjs.com/package/@azure/arm-cosmosdb)
-
-The Azure CLI approach is used for this quickstart and passwordless access. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command to create a Cosmos DB for NoSQL database.
-
-```azurecli
-# Create a SQL API database `
-az cosmosdb sql database create `
- --account-name <cosmos-db-account-name> `
- --resource-group <resource-group-name> `
- --name cosmicworks
-```
-
-The command line to create a database is for PowerShell, shown on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line.
-
-## [Connection String](#tab/connection-string)
-
-Add the following code to use the [``CosmosClient.Databases.createDatabaseIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method returns a reference to the existing or newly created database.
----
-### Create a container
-
-## [Passwordless (Recommended)](#tab/passwordless)
-
-The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations such as creating and deleting databases you must use RBAC through one of the following options:
-
-> - [Azure CLI scripts](manage-with-cli.md)
-> - [Azure PowerShell scripts](manage-with-powershell.md)
-> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
-> - [Azure Resource Manager JavaScript client library](https://www.npmjs.com/package/@azure/arm-cosmosdb)
-
-The Azure CLI approach is used in this example. Use the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command to create a Cosmos DB container.
-
-```azurecli
-# Create a SQL API container
-az cosmosdb sql container create `
- --account-name <cosmos-db-account-name> `
- --resource-group <resource-group-name> `
- --database-name cosmicworks `
- --partition-key-path "/categoryId" `
- --name products
-```
-
-The command line to create a container is for PowerShell, on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line. For Bash, you'll also need to add `MSYS_NO_PATHCONV=1` before the command so that Bash deals with the partition key parameter correctly.
-
-After the resources have been created, use classes from the `Microsoft.Azure.Cosmos` client libraries to connect to and query the database.
-
-## [Connection String](#tab/connection-string)
-
-Add the following code to create a container with the [``Database.Containers.createContainerIfNotExistsAsync``](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) method. The method returns a reference to the container.
--- ### Create an item
-Add the following code to provide your data set. Each _product_ has a unique ID, name, category id (used as partition key) and other fields.
--
-Create a few items in the container by calling [``Container.Items.create``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-create) in a loop.
+Build a new object with all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and sale. Create an item in the container using `container.items.upsert`. This method "upserts" the item, effectively replacing it if it already exists.
-### Get an item
+### Read an item
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.item().read``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) passing in both values to return an item.
+Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `container.item` to get a pointer to an item and `item.read` to efficiently retrieve the specific item.
-The partition key is specific to a container. In this Contoso Products container, the category id, `categoryId`, is used as the partition key.
- ### Query items
-Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec) then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that manages the pages of results. Then, use a combination of ``while`` and ``for`` loops to [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext) page of results as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse) and then iterate over the individual data objects.
-
-The query is programmatically composed to `SELECT * FROM todo t WHERE t.partitionKey = 'Bikes, Touring Bikes'`.
+Perform a query over multiple items in a container using `container.items.query`. Find all items within a specified category using this parameterized query:
-
-If you want to use this data returned from the FeedResponse as an _item_, you need to create an [``Item``](/javascript/api/@azure/cosmos/item), using the [``Container.Items.read``](#get-an-item) method.
-
-### Delete an item
-
-Add the following code to delete an item. You need to use the ID and partition key to get the item, then delete it. This example uses the [``Container.Item.delete``](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-delete) method to delete the item.
--
-## Run the code
-
-This app creates an Azure Cosmos DB SQL API database and container. The example then creates items and then reads one item back. Finally, the example issues a query that should only return items matching a specific category. With each step, the example outputs metadata to the console about the steps it has performed.
-
-To run the app, use a terminal to navigate to the application directory and run the application.
-
-```bash
-node index.js
+```nosql
+SELECT * FROM products p WHERE p.category = @category
```
-The output of the app should be similar to this example:
-
-```output
-contoso_1663276732626 database ready
-products_1663276732626 container ready
-'Touring-1000 Blue, 50' inserted
-'Touring-1000 Blue, 46' inserted
-'Mountain-200 Black, 42' inserted
-Touring-1000 Blue, 50 read
-08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2: Touring-1000 Blue, 50, BK-T79U-50
-2C981511-AC73-4A65-9DA3-A0577E386394: Touring-1000 Blue, 46, BK-T79U-46
-0F124781-C991-48A9-ACF2-249771D44029 Item deleted
-```
+Fetch all of the results of the query using `query.fetchAll`. Loop through the results of the query.
-## Clean up resources
+## Related content
-## Next steps
+- [.NET Quickstart](quickstart-dotnet.md)
+- [Java Quickstart](quickstart-java.md)
+- [Python Quickstart](quickstart-python.md)
+- [Go Quickstart](quickstart-go.md)
-In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
+## Next step
> [!div class="nextstepaction"]
-> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
+> [Tutorial: Build a Node.js web app](tutorial-nodejs-web-app.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
Title: Quickstart - Azure Cosmos DB for NoSQL client library for Python
-description: Learn how to build a Python app to manage Azure Cosmos DB for NoSQL account resources and data.
+ Title: Quickstart - Python client library
+
+description: Deploy a Python Flask web application that uses the client library to interact with Azure Cosmos DB for NoSQL data in this quickstart.
+ ms.devlang: python- Previously updated : 1/17/2023-+ Last updated : 01/08/2024
+zone_pivot_groups: azure-cosmos-db-quickstart-env
+# CustomerIntent: As a developer, I want to learn the basics of the Python library so that I can build applications with Azure Cosmos DB for NoSQL.
-# Quickstart: Azure Cosmos DB for NoSQL client library for Python
+# Quickstart: Azure Cosmos DB for NoSQL library for Python
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-Get started with the Azure Cosmos DB client library for Python to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB for NoSQL client library for Python to query data in your containers and perform common operations on individual items. Follow these steps to deploy a minimal solution to your environment using the Azure Developer CLI.
-> [!NOTE]
-> The [example code snippets](https://github.com/azure-samples/cosmos-db-nosql-python-samples) are available on GitHub as a Python project.
-
-[API reference documentation](/python/api/azure-cosmos/azure.cosmos) | [Library source code](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos) | [Package (PyPI)](https://pypi.org/project/azure-cosmos) | [Samples](samples-python.md)
+[API reference documentation](/python/api/overview/azure/cosmos-readme) | [Library source code](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos) | [Package (PyPI)](https://pypi.org/project/azure-cosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
## Prerequisites
-
-- An Azure account with an active subscription.
-  - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Python 3.7 or later](https://www.python.org/downloads/)
-  - Ensure the `python` executable is in your `PATH`.
-- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-
-### Prerequisite check
-
-- In a command shell, run `python --version` to check that the version is 3.7 or later.
-- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.

## Setting up
-This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the Azure Cosmos DB for NoSQL client library for Python to manage resources.
+Deploy this project's development container to your environment. Then, use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and deploy a containerized sample application. The sample application uses the client library to manage, create, read, and query sample data.
-### Create an Azure Cosmos DB account
-> [!TIP]
-> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new Python app](#create-a-new-python-app) section.
+[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/azure-samples/cosmos-db-nosql-python-quickstart?template=false&quickstart=1&azure-portal=true)
-### Create a new Python app
-Create a new Python code file (*app.py*) in an empty folder using your preferred integrated development environment (IDE).
+[![Open in Dev Container](https://img.shields.io/static/v1?style=for-the-badge&label=Dev+Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/azure-samples/cosmos-db-nosql-python-quickstart)
-### Install packages
-Use the `pip install` command to install packages you'll need in the quickstart.
-### [Passwordless (Recommended)](#tab/passwordless)
+### Install the client library
-Add the [`azure-cosmos`](https://pypi.org/project/azure-cosmos) and [`azure-identity`](https://pypi.org/project/azure-identity) PyPI packages to the Python app.
+The client library is available through the Python Package Index, as the `azure-cosmos` library.
-```bash
-pip install azure-cosmos
-pip install azure-identity
-```
-
-### [Connection String](#tab/connection-string)
+1. Open a terminal and navigate to the `/src` folder.
-Add the [`azure-cosmos`](https://pypi.org/project/azure-cosmos) PyPI package to the Python app.
+ ```bash
+ cd ./src
+ ```
-```bash
-pip install azure-cosmos
-```
+1. If not already installed, install the `azure-cosmos` package using `pip install`.
-
+ ```bash
+ pip install azure-cosmos
+ ```
+1. Also, install the `azure-identity` package if not already installed.
-### Configure environment variables
+ ```bash
+ pip install azure-identity
+ ```
+1. Open and review the **src/requirements.txt** file to validate that the `azure-cosmos` and `azure-identity` entries both exist.
## Object model
-
-You'll use the following Python classes to interact with these resources:
-
-- [``CosmosClient``](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-- [``DatabaseProxy``](/python/api/azure-cosmos/azure.cosmos.database.databaseproxy) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-- [``ContainerProxy``](/python/api/azure-cosmos/azure.cosmos.container.containerproxy) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+| Name | Description |
+| --- | --- |
+| [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) | This class is the primary client class and is used to manage account-wide metadata or databases. |
+| [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.database.databaseproxy) | This class represents a database within the account. |
+| [`ContainerProxy`](/python/api/azure-cosmos/azure.cosmos.container.containerproxy) | This class is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
+| [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partition_key.partitionkey) | This class represents a logical partition key. This class is required for many common operations and queries. |
## Code examples

- [Authenticate the client](#authenticate-the-client)
-- [Create a database](#create-a-database)
-- [Create a container](#create-a-container)
+- [Get a database](#get-a-database)
+- [Get a container](#get-a-container)
- [Create an item](#create-an-item)
-- [Get an item](#get-an-item)
+- [Read an item](#read-an-item)
- [Query items](#query-items)
-The sample code described in this article creates a database named ``cosmicworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-
-For this sample code, the container will use the category as a logical partition key.
### Authenticate the client -
-## [Passwordless (Recommended)](#tab/passwordless)
---
-#### Authenticate using DefaultAzureCredential
--
-From the project directory, open the *app.py* file. In your editor, add modules to work with Cosmos DB and authenticate to Azure. You'll authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` from the [`azure-identity`](https://pypi.org/project/azure-identity/) package. `DefaultAzureCredential` will automatically discover and use the account you signed-in with previously.
--
-Create an environment variable that specifies your Cosmos DB endpoint.
--
-Create constants for the database and container names.
--
-Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the `DefaultAzureCredential` object.
--
-## [Connection String](#tab/connection-string)
-
-From the project directory, open the *app.py* file. In your editor, import the `os` and `json` modules. Then, import the `CosmosClient` and `PartitionKey` classes from the `azure.cosmos` module.
--
-Create constants for the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables using `os.environ`.
--
-Create constants for the database and container names.
--
-Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the two variables you created as parameters.
----
-### Create a database
-
-## [Passwordless (Recommended)](#tab/passwordless)
-
-The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations, such as creating and deleting databases, you must use RBAC through one of the following options:
-
-> - [Azure CLI scripts](manage-with-cli.md)
-> - [Azure PowerShell scripts](manage-with-powershell.md)
-> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
-> - [Azure Resource Manager Python client library](https://pypi.org/project/azure-mgmt-cosmosdb)
-
-The Azure CLI approach is used for this quickstart and passwordless access. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command to create a Cosmos DB for NoSQL database.
-
-```azurecli
-# Create a SQL API database `
-az cosmosdb sql database create `
- --account-name <cosmos-db-account-name> `
- --resource-group <resource-group-name> `
- --name cosmicworks
-```
-
-The command line to create a database is for PowerShell, shown on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line.
-
-## [Connection String](#tab/connection-string)
-
-Use the [`CosmosClient.create_database_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient#azure-cosmos-cosmos-client-cosmosclient-create-database-if-not-exists) method to create a new database if it doesn't already exist. This method will return a [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.databaseproxy) reference to the existing or newly created database.
----
-### Create a container
-
-## [Passwordless (Recommended)](#tab/passwordless)
-
-The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations such as creating and deleting databases you must use RBAC through one of the following options:
-> - [Azure CLI scripts](manage-with-cli.md)
-> - [Azure PowerShell scripts](manage-with-powershell.md)
-> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
-> - [Azure Resource Manager Python client library](https://pypi.org/project/azure-mgmt-cosmosdb)
-
-The Azure CLI approach is used in this example. Use the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command to create a Cosmos DB container.
-
-```azurecli
-# Create a SQL API container
-az cosmosdb sql container create `
- --account-name <cosmos-db-account-name> `
- --resource-group <resource-group-name> `
- --database-name cosmicworks `
- --partition-key-path "/categoryId" `
- --name products
-```
+This sample creates a new instance of the `CosmosClient` type and authenticates using a `DefaultAzureCredential` instance.
-The command line to create a container is for PowerShell, on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line. For Bash, you'll also need to add `MSYS_NO_PATHCONV=1` before the command so that Bash deals with the partition key parameter correctly.
-After the resources have been created, use classes from the `Microsoft.Azure.Cosmos` client libraries to connect to and query the database.
+### Get a database
-## [Connection String](#tab/connection-string)
+Use `client.get_database_client` to retrieve the existing database named *`cosmicworks`*.
-The [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partitionkey) class defines a partition key path that you can use when creating a container.
+### Get a container
-
+Retrieve the existing *`products`* container using `database.get_container_client`.
-The [`Databaseproxy.create_container_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.databaseproxy#azure-cosmos-databaseproxy-create-container-if-not-exists) method will create a new container if it doesn't already exist. This method will also return a [`ContainerProxy`](/python/api/azure-cosmos/azure.cosmos.containerproxy) reference to the container.
### Create an item
-Create a new item in the container by first creating a new variable (`new_item`) with a sample item defined. In this example, the unique identifier of this item is `70b63682-b93a-4c77-aad2-65501347265f`. The partition key value is derived from the `/categoryId` path, so it would be `61dba35b-4f02-45c5-b648-c6badc0cbd79`.
-
-#### [Sync / Async](#tab/sync+async)
----
-> [!TIP]
-> The remaining fields are flexible and you can define as many or as few as you want. You can even combine different item schemas in the same container.
+Build a new object with all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and sale. Create an item in the container using `container.upsert_item`. This method "upserts" the item, effectively replacing it if it already exists.
-Create an item in the container by using the [`ContainerProxy.create_item`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-create-item) method passing in the variable you already created.
-#### [Sync](#tab/sync)
+### Read an item
-
-#### [Async](#tab/async)
----
-### Get an item
-
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [`ContainerProxy.read_item`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item) passing in both values to return an item as a dictionary of strings and values (`dict[str, Any]`).
-
-#### [Sync](#tab/sync)
--
-#### [Async](#tab/async)
---
+Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `container.read_item` to efficiently retrieve the specific item.
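As a minimal sketch of the create and read steps described above (the item values are illustrative, the container is assumed to be partitioned on `/category`, and the setup mirrors the earlier sketch):

```python
import os

from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

client = CosmosClient(url=os.environ["COSMOS_ENDPOINT"], credential=DefaultAzureCredential())
container = client.get_database_client("cosmicworks").get_container_client("products")

# Illustrative item; the partition key value comes from the "category" field.
new_item = {
    "id": "70b63682-b93a-4c77-aad2-65501347265f",
    "category": "gear-surf-surfboards",
    "name": "Yamba Surfboard",
    "quantity": 12,
    "price": 850.00,
    "sale": False,
}

# Create the item, replacing it if it already exists.
container.upsert_item(new_item)

# Point read using both the unique identifier and the partition key value.
existing_item = container.read_item(
    item="70b63682-b93a-4c77-aad2-65501347265f",
    partition_key="gear-surf-surfboards",
)
print(existing_item["name"])
```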
-In this example, the dictionary result is saved to a variable named `existing_item`.
### Query items
-After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM products p WHERE p.categoryId = "61dba35b-4f02-45c5-b648-c6badc0cbd79"``. This example uses query parameterization to construct the query. The query uses a string of the SQL query, and a dictionary of query parameters.
+Perform a query over multiple items in a container using `container.query_items`. Find all items within a specified category using this parameterized query:
-#### [Sync / Async](#tab/sync+async)
----
-This example dictionary included the `@categoryId` query parameter and the corresponding value `61dba35b-4f02-45c5-b648-c6badc0cbd79`.
-
-Once the query is defined, call [`ContainerProxy.query_items`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) to run the query and return the results as a paged set of items (`ItemPage[Dict[str, Any]]`).
-
-#### [Sync / Async](#tab/sync+async)
----
-Finally, use a for loop to iterate over the results in each page and perform various actions.
-
-#### [Sync](#tab/sync)
--
-#### [Async](#tab/async)
----
-In this example, `json.dumps` is used to print the item to the console in a human-readable way.
-
-## Run the code
-
-This app creates an API for NoSQL database and container. The example then creates an item and then reads the exact same item back. Finally, the example issues a query that should only return that single item. At the final step, the example outputs the final item to the console.
-
-Use a terminal to navigate to the application directory and run the application.
-
-```bash
-python app.py
+```nosql
+SELECT * FROM products p WHERE p.category = @category
```
-The output of the app should be similar to this example:
-
-```output
-Database cosmicworks
-Container products
-Point read Yamba Surfboard
-Result list [
- {
- "id": "70b63682-b93a-4c77-aad2-65501347265f",
- "categoryId": "61dba35b-4f02-45c5-b648-c6badc0cbd79",
- "categoryName": "gear-surf-surfboards",
- "name": "Yamba Surfboard",
- "quantity": 12,
- "sale": false,
- "_rid": "KSsMAPI2fH0BAAAAAAAAAA==",
- "_self": "dbs/KSsMAA==/colls/KSsMAPI2fH0=/docs/KSsMAPI2fH0BAAAAAAAAAA==/",
- "_etag": "\"48002b76-0000-0200-0000-63c85f9d0000\"",
- "_attachments": "attachments/",
- "_ts": 1674076061
- }
-]
-```
+
+Loop through the results of the query.
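A hedged sketch of running this parameterized query with `container.query_items` and looping through the results, reusing the illustrative setup and values from the earlier sketches:

```python
import os

from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

client = CosmosClient(url=os.environ["COSMOS_ENDPOINT"], credential=DefaultAzureCredential())
container = client.get_database_client("cosmicworks").get_container_client("products")

# Parameterized query scoped to a single partition key value.
results = container.query_items(
    query="SELECT * FROM products p WHERE p.category = @category",
    parameters=[{"name": "@category", "value": "gear-surf-surfboards"}],
    partition_key="gear-surf-surfboards",
)

# query_items returns an iterable of dictionary-like items.
for item in results:
    print(f"{item['id']}: {item['name']}")
```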
-> [!NOTE]
-> The fields assigned by Azure Cosmos DB will vary from this sample output.
-## Clean up resources
+## Related content
+- [.NET Quickstart](quickstart-dotnet.md)
+- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Java Quickstart](quickstart-java.md)
+- [Go Quickstart](quickstart-go.md)
-## Next steps
+## Next step
-In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the Python SDK. You can now dive deeper into guidance on how to import your data into the API for NoSQL.
+> [!div class="nextstepaction"]
+> [PyPI package](https://pypi.org/project/azure-cosmos)
cosmos-db Quickstart Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md
- Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL
-description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL that you can use to connect to and query data in your Azure Cosmos DB account
---- Previously updated : 03/01/2022-----
-# Quickstart: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL
-
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
-
-This quickstart shows how to use the Azure Cosmos DB Spark Connector to read from or write to Azure Cosmos DB. The Azure Cosmos DB Spark Connector supports Spark 3.1.x, 3.2.x, and 3.3.x.
-
-Throughout this quick tutorial, we rely on [Azure Databricks Runtime 12.2 with Spark 3.3.2](/azure/databricks/release-notes/runtime/12.2) and a Jupyter Notebook to show how to use the Azure Cosmos DB Spark Connector.
-
-You should be able to use any language supported by Spark (PySpark, Scala, Java, etc.), or any Spark interface you are familiar with (Jupyter Notebook, Livy, etc.).
-
-## Prerequisites
-
-* An Azure account with an active subscription.
-
- * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-
-* [Azure Databricks](/azure/databricks/release-notes/runtime/12.2) runtime 12.2 with Spark 3.3.2
-
-* (Optional) [SLF4J binding](https://www.slf4j.org/manual.html) is used to associate a specific logging framework with SLF4J.
-
-SLF4J is only needed if you plan to use logging. In that case, also download an SLF4J binding, which links the SLF4J API with the logging implementation of your choice. For more information, see the [SLF4J user manual](https://www.slf4j.org/manual.html).
-
-Install the Azure Cosmos DB Spark Connector in your Spark cluster [using the latest version for Spark 3.3.x](https://aka.ms/azure-cosmos-spark-3-3-download).
-
-The getting started guide is based on PySpark and Scala, and you can run the following code snippets in an Azure Databricks PySpark or Scala notebook.
-
-## Create databases and containers
-
-First, set Azure Cosmos DB account credentials, and the Azure Cosmos DB Database name and container name.
-
-#### [Python](#tab/python)
-
-```python
-cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
-cosmosMasterKey = "REPLACEME"
-cosmosDatabaseName = "sampleDB"
-cosmosContainerName = "sampleContainer"
-
-cfg = {
- "spark.cosmos.accountEndpoint" : cosmosEndpoint,
- "spark.cosmos.accountKey" : cosmosMasterKey,
- "spark.cosmos.database" : cosmosDatabaseName,
- "spark.cosmos.container" : cosmosContainerName,
-}
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-val cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
-val cosmosMasterKey = "REPLACEME"
-val cosmosDatabaseName = "sampleDB"
-val cosmosContainerName = "sampleContainer"
-
-val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName
-)
-```
--
-Next, you can use the new Catalog API to create an Azure Cosmos DB Database and Container through Spark.
-
-#### [Python](#tab/python)
-
-```python
-# Configure Catalog Api to be used
-spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
-spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
-spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
-
-# create an Azure Cosmos DB database using catalog api
-spark.sql("CREATE DATABASE IF NOT EXISTS cosmosCatalog.{};".format(cosmosDatabaseName))
-
-# create an Azure Cosmos DB container using catalog api
-spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosContainerName))
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-// Configure Catalog Api to be used
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
-spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
-
-// create an Azure Cosmos DB database using catalog api
-spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName};")
-
-// create an Azure Cosmos DB container using catalog api
-spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')")
-```
--
-When creating containers with the Catalog API, you can set the throughput and [partition key path](../partitioning-overview.md#choose-partitionkey) for the container to be created.
-
-For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
-
-## Ingest data
-
-The name of the data source is `cosmos.oltp`. The following example shows how you can write an in-memory DataFrame consisting of two items to Azure Cosmos DB:
-
-#### [Python](#tab/python)
-
-```python
-spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "Schrodinger cat", 2, False)))\
- .toDF("id","name","age","isAlive") \
- .write\
- .format("cosmos.oltp")\
- .options(**cfg)\
- .mode("APPEND")\
- .save()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-spark.createDataFrame(Seq(("cat-alive", "Schrodinger cat", 2, true), ("cat-dead", "Schrodinger cat", 2, false)))
- .toDF("id","name","age","isAlive")
- .write
- .format("cosmos.oltp")
- .options(cfg)
- .mode("APPEND")
- .save()
-```
--
-Note that `id` is a mandatory field for Azure Cosmos DB.
-
-For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
-
-## Query data
-
-Using the same `cosmos.oltp` data source, we can query data and use `filter` to push down filters:
-
-#### [Python](#tab/python)
-
-```python
-from pyspark.sql.functions import col
-
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .option("spark.cosmos.read.inferSchema.enabled", "true")\
- .load()
-
-df.filter(col("isAlive") == True)\
- .show()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-import org.apache.spark.sql.functions.col
-
-val df = spark.read.format("cosmos.oltp").options(cfg).load()
-
-df.filter(col("isAlive") === true)
- .withColumn("age", col("age") + 1)
- .show()
-```
--
-For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
-
-## Partial document update using Patch
-
-Using the same `cosmos.oltp` data source, we can do partial update in Azure Cosmos DB using Patch API:
-
-#### [Python](#tab/python)
-
-```python
-cfgPatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint,
- "spark.cosmos.accountKey": cosmosMasterKey,
- "spark.cosmos.database": cosmosDatabaseName,
- "spark.cosmos.container": cosmosContainerName,
- "spark.cosmos.write.strategy": "ItemPatch",
- "spark.cosmos.write.bulk.enabled": "false",
- "spark.cosmos.write.patch.defaultOperationType": "Set",
- "spark.cosmos.write.patch.columnConfigs": "[col(name).op(set)]"
- }
-
-id = "<document-id>"
-query = "select * from cosmosCatalog.{}.{} where id = '{}';".format(
- cosmosDatabaseName, cosmosContainerName, id)
-
-dfBeforePatch = spark.sql(query)
-print("document before patch operation")
-dfBeforePatch.show()
-
-data = [{"id": id, "name": "Joel Brakus"}]
-patchDf = spark.createDataFrame(data)
-
-patchDf.write.format("cosmos.oltp").mode("Append").options(**cfgPatch).save()
-
-dfAfterPatch = spark.sql(query)
-print("document after patch operation")
-dfAfterPatch.show()
-```
-
-For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
--
-#### [Scala](#tab/scala)
-
-```scala
-val cfgPatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName,
- "spark.cosmos.write.strategy" -> "ItemPatch",
- "spark.cosmos.write.bulk.enabled" -> "false",
-
- "spark.cosmos.write.patch.columnConfigs" -> "[col(name).op(set)]"
- )
-
-val id = "<document-id>"
-val query = s"select * from cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} where id = '$id';"
-
-val dfBeforePatch = spark.sql(query)
-println("document before patch operation")
-dfBeforePatch.show()
-val patchDf = Seq(
- (id, "Joel Brakus")
- ).toDF("id", "name")
-
-patchDf.write.format("cosmos.oltp").mode("Append").options(cfgPatch).save()
-val dfAfterPatch = spark.sql(query)
-println("document after patch operation")
-dfAfterPatch.show()
-```
-
-For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
---
-## Schema inference
-
-When querying data, the Spark Connector can infer the schema based on sampling existing items by setting `spark.cosmos.read.inferSchema.enabled` to `true`.
-
-#### [Python](#tab/python)
-
-```python
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .option("spark.cosmos.read.inferSchema.enabled", "true")\
- .load()
-
-df.printSchema()
--
-# Alternatively, you can pass the custom schema you want to be used to read the data:
-
-customSchema = StructType([
- StructField("id", StringType()),
- StructField("name", StringType()),
- StructField("type", StringType()),
- StructField("age", IntegerType()),
- StructField("isAlive", BooleanType())
- ])
-
-df = spark.read.schema(customSchema).format("cosmos.oltp").options(**cfg)\
- .load()
-
-df.printSchema()
-
-# If no custom schema is specified and schema inference is disabled, then the resulting data contains the raw JSON content of the items:
-
-df = spark.read.format("cosmos.oltp").options(**cfg)\
- .load()
-
-df.printSchema()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-val cfgWithAutoSchemaInference = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName,
- "spark.cosmos.read.inferSchema.enabled" -> "true"
-)
-
-val df = spark.read.format("cosmos.oltp").options(cfgWithAutoSchemaInference).load()
-df.printSchema()
-
-df.show()
-```
--
-For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation.
-
-## Raw JSON support for Spark Connector
- When working with Azure Cosmos DB, you may come across documents that contain an array of entries with potentially different structures. These documents typically have an array called "tags" whose items have varying structures, along with a "tag_id" field that serves as an entity type identifier. To patch such documents efficiently in Spark, you can use a custom function that transforms them before writing.
-
-**Sample document that can be used**
--
-```
-{
- "id": "Test01",
- "document_type": "tag",
- "tags": [
- {
- "tag_id": "key_val",
- "params": "param1=val1;param2=val2"
- },
- {
- "tag_id": "arrays",
- "tags": "tag1,tag2,tag3"
- }
- ]
-}
-```
-
-#### [Python](#tab/python)
-
-```python
-
-def init_sequences_db_config():
- #Configure Config for Cosmos DB Patch and Query
- global cfgSequencePatch
- cfgSequencePatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint,
- "spark.cosmos.accountKey": cosmosMasterKey,
- "spark.cosmos.database": cosmosDatabaseName,
- "spark.cosmos.container": cosmosContainerNameTarget,
- "spark.cosmos.write.strategy": "ItemPatch", # Partial update all documents based on the patch config
- "spark.cosmos.write.bulk.enabled": "true",
- "spark.cosmos.write.patch.defaultOperationType": "Replace",
- "spark.cosmos.read.inferSchema.enabled": "false"
- }
-
-def adjust_tag_array(rawBody):
- print("test adjust_tag_array")
- array_items = json.loads(rawBody)["tags"]
- print(json.dumps(array_items))
-
- output_json = [{}]
-
- for item in array_items:
- output_json_item = {}
- # Handle different tag types
- if item["tag_id"] == "key_val":
- output_json_item.update({"tag_id" : item["tag_id"]})
- params = item["params"].split(";")
- for p in params:
- key_val = p.split("=")
- element = {key_val[0]: key_val[1]}
- output_json_item.update(element)
-
- if item["tag_id"] == "arrays":
- tags_array = item["tags"].split(",")
- output_json_item.update({"tags": tags_array})
-
- output_json.append(output_json_item)
-
- # convert to raw json
- return json.dumps(output_json)
--
-init_sequences_db_config()
-
-native_query = "SELECT c.id, c.tags, c._ts from c where EXISTS(SELECT VALUE t FROM t IN c.tags WHERE IS_DEFINED(t.tag_id))".format()
-
-# the custom query will be processed against the Cosmos endpoint
-cfgSequencePatch["spark.cosmos.read.customQuery"] = native_query
-# Cosmos DB patch column configs
-cfgSequencePatch["spark.cosmos.write.patch.columnConfigs"] = "[col(tags_new).path(/tags).op(set).rawJson]"
-
-# load df
-df_relevant_sequences = spark.read.format("cosmos.oltp").options(**cfgSequencePatch).load()
-print(df_relevant_sequences)
-df_relevant_sequences.show(20, False)
-if not df_relevant_sequences.isEmpty():
- print("Found sequences to patch")
-
- # prepare udf function
- tags_udf= udf(lambda a: adjust_tag_array(a), StringType())
-
- df_relevant_sequences.show(20, False)
-
- # apply udf function for patching raw json
- df_relevant_sequences_adjusted = df_relevant_sequences.withColumn("tags_new", tags_udf("_rawBody"))
- df_relevant_sequences_adjusted.show(20, False)
-
- # write df
- output_df = df_relevant_sequences_adjusted.select("id","tags_new")
- output_df.write.format("cosmos.oltp").mode("Append").options(**cfgSequencePatch).save()
-
-```
-#### [Scala](#tab/scala)
-```scala
-var cfgSequencePatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName,
- "spark.cosmos.write.strategy" -> "ItemPatch", // Partial update all documents based on the patch config
- "spark.cosmos.write.bulk.enabled" -> "false",
- "spark.cosmos.write.patch.defaultOperationType" -> "Replace",
- "spark.cosmos.read.inferSchema.enabled" -> "false"
-)
-
-def patchTags(rawJson: String): String = {
- implicit val formats = DefaultFormats
- val json = JsonMethods.parse(rawJson)
- val tagsArray = (json \ "tags").asInstanceOf[JArray]
- var outList = new ListBuffer[Map[String, Any]]
-
- tagsArray.arr.foreach { tag =>
- val tagId = (tag \ "tag_id").extract[String]
- var outMap = Map.empty[String, Any]
-
- // Handle different tag types
- tagId match {
- case "key_val" =>
- val params = (tag \ "params").extract[String].split(";")
- for (p <- params) {
- val paramVal = p.split("=")
- outMap += paramVal(0) -> paramVal(1)
- }
- case "arrays" =>
- val tags = (tag \ "tags").extract[String]
- val tagList = tags.split(",")
- outMap += "arrays" -> tagList
- case _ => {}
- }
- outList += outMap
- }
- // convert to raw json
- write(outList)
-}
-
-val nativeQuery = "SELECT c.id, c.tags, c._ts from c where EXISTS(SELECT VALUE t FROM t IN c.tags WHERE IS_DEFINED(t.tag_id))"
-
-// the custom query will be processed against the Cosmos endpoint
-cfgSequencePatch += "spark.cosmos.read.customQuery" -> nativeQuery
-
-//Cosmos DB patch column configs
-cfgSequencePatch += "spark.cosmos.write.patch.columnConfigs" -> "[col(tags_new).path(/tags).op(set).rawJson]"
-
-// load df
-val dfRelevantSequences = spark.read.format("cosmos.oltp").options(cfgSequencePatch).load()
-dfRelevantSequences.show(20, false)
-
-if(!dfRelevantSequences.isEmpty){
- println("Found sequences to patch")
-
- // prepare udf function
- val patchTagsUDF = udf(patchTags _)
-
- // apply udf function for patching raw json
- val dfRelevantSequencesAdjusted = dfRelevantSequences.withColumn("tags_new", patchTagsUDF(dfRelevantSequences("_rawBody")))
-
- dfRelevantSequencesAdjusted.show(20, false)
-
- var outputDf = dfRelevantSequencesAdjusted.select("id","tags_new")
-
- // write df
- outputDf.write.format("cosmos.oltp").mode("Append").options(cfgSequencePatch).save()
-}
-
-```
---
-## Hierarchical Partition Keys
-
-You can also use the Spark Connector to create containers with [hierarchical partition keys](../hierarchical-partition-keys.md) in Azure Cosmos DB. Here we create a new container with hierarchical partition keys defined using the existing database from the above samples, ingest some data, then query using the first two levels in the hierarchy.
-
-#### [Python](#tab/python)
-
-```python
-from pyspark.sql.types import StringType
-from pyspark.sql.functions import udf
-
-# create an Azure Cosmos DB container with hierarchical partitioning using catalog api
-cosmosHierarchicalContainerName = "HierarchicalPartitionKeyContainer"
-spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/tenantId,/userId,/sessionId', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosHierarchicalContainerName))
-
-cfg = {
- "spark.cosmos.accountEndpoint" : cosmosEndpoint,
- "spark.cosmos.accountKey" : cosmosMasterKey,
- "spark.cosmos.database" : cosmosDatabaseName,
- "spark.cosmos.container" : cosmosHierarchicalContainerName,
- "spark.cosmos.read.partitioning.strategy" : "Restrictive"
-}
-
-#ingest some data
-spark.createDataFrame((("id1", "tenant 1", "User 1", "session 1"), ("id2", "tenant 1", "User 1", "session 1"), ("id3", "tenant 2", "User 1", "session 1"))) \
- .toDF("id","tenantId","userId","sessionId") \
- .write \
- .format("cosmos.oltp") \
- .options(**cfg) \
- .mode("APPEND") \
- .save()
-
-#query by filtering the first two levels in the hierarchy without feedRangeFilter - this is less efficient as it will go through all physical partitions
-query_df = spark.read.format("cosmos.oltp").options(**cfg) \
-.option("spark.cosmos.read.customQuery" , "SELECT * from c where c.tenantId = 'tenant 1' and c.userId = 'User 1'").load()
-query_df.show()
-
-# prepare feed range to filter on first two levels in the hierarchy
-spark.udf.registerJavaFunction("GetFeedRangeForPartitionKey", "com.azure.cosmos.spark.udf.GetFeedRangeForHierarchicalPartitionKeyValues", StringType())
-pkDefinition = "{\"paths\":[\"/tenantId\",\"/userId\",\"/sessionId\"],\"kind\":\"MultiHash\"}"
-pkValues = "[\"tenant 1\", \"User 1\"]"
-feedRangeDf = spark.sql(f"SELECT GetFeedRangeForPartitionKey('{pkDefinition}', '{pkValues}')")
-feedRange = feedRangeDf.collect()[0][0]
-
-# query by filtering the first two levels in the hierarchy using feedRangeFilter (will target the physical partition in which all sub-partitions are co-located)
-query_df = spark.read.format("cosmos.oltp").options(**cfg).option("spark.cosmos.partitioning.feedRangeFilter",feedRange).load()
-query_df.show()
-```
-
-#### [Scala](#tab/scala)
-
-```scala
-import com.azure.cosmos.spark.udf.{GetFeedRangeForHierarchicalPartitionKeyValues}
-import org.apache.spark.sql.types._
-
-//create an Azure Cosmos DB container with hierarchical partitioning using catalog api
-val cosmosHierarchicalContainerName = "HierarchicalPartitionKeyContainer"
-spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosHierarchicalContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/tenantId,/userId,/sessionId', manualThroughput = '1100')")
-
-//ingest some data
-val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.accountKey" -> cosmosMasterKey,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosHierarchicalContainerName,
- "spark.cosmos.read.partitioning.strategy" -> "Restrictive"
-)
-spark.createDataFrame(Seq(("id1", "tenant 1", "User 1", "session 1"), ("id2", "tenant 1", "User 1", "session 1"), ("id3", "tenant 2", "User 1", "session 1")))
- .toDF("id","tenantId","userId","sessionId")
- .write
- .format("cosmos.oltp")
- .options(cfg)
- .mode("APPEND")
- .save()
-
-//query by filtering the first two levels in the hierarchy without feedRangeFilter - this is less efficient as it will go through all physical partitions
-val query1 = cfg + ("spark.cosmos.read.customQuery" -> "SELECT * from c where c.tenantId = 'tenant 1' and c.userId = 'User 1'")
-val query_df1 = spark.read.format("cosmos.oltp").options(query1).load()
-query_df1.show
-
-//prepare feed range filter to filter on first two levels in the hierarchy
-spark.udf.register("GetFeedRangeForPartitionKey", new GetFeedRangeForHierarchicalPartitionKeyValues(), StringType)
-val pkDefinition = "{\"paths\":[\"/tenantId\",\"/userId\",\"/sessionId\"],\"kind\":\"MultiHash\"}"
-val pkValues = "[\"tenant 1\", \"User 1\"]"
-val feedRangeDf = spark.sql(s"SELECT GetFeedRangeForPartitionKey('$pkDefinition', '$pkValues')")
-val feedRange = feedRangeDf.collect()(0).getAs[String](0)
-
-//filtering the first two levels in the hierarchy using feedRangeFilter (will target the physical partition in which all sub-partitions are co-located)
-val query2 = cfg + ("spark.cosmos.partitioning.feedRangeFilter" -> feedRange)
-val query_df2 = spark.read.format("cosmos.oltp").options(query2).load()
-query_df2.show
-```
--
-## Configuration reference
-
-The Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL has a complete configuration reference that provides more advanced settings for writing and querying data, serialization, streaming using change feed, partitioning and throughput management and more. For a complete listing with details, see our [Spark Connector Configuration Reference](https://aka.ms/azure-cosmos-spark-3-config) on GitHub.
--
-<a name='azure-active-directory-authentication'></a>
-
-## Microsoft Entra authentication
-
-1. Follow the instructions to [register an application with Microsoft Entra ID and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
-
-1. You should still be in Azure portal > Microsoft Entra ID > App Registrations. In the `Certificates & secrets` section, create a new secret. Save the value for later.
-
-1. Select the **Overview** tab and find the values for `clientId` and `tenantId`. Save these values in a file for later, along with the `clientSecret` that you created earlier. Also record the `cosmosEndpoint`, `subscriptionId`, and `resourceGroupName` from your Azure Cosmos DB account.
-
-1. Create a role using the `az cosmosdb sql role definition create` command. Pass in the Azure Cosmos DB account name and resource group, followed by a body of JSON that defines the custom role. The following example creates a role named `SparkConnectorAAD` with permissions to read and write items in Azure Cosmos DB containers. The role is also scoped to the account level using `/`.
-
- ```azurecli-interactive
- resourceGroupName='<myResourceGroup>'
- accountName='<myCosmosAccount>'
- az cosmosdb sql role definition create \
- --account-name $accountName \
- --resource-group $resourceGroupName \
- --body '{
- "RoleName": "SparkConnectorAAD",
- "Type": "CustomRole",
- "AssignableScopes": ["/"],
- "Permissions": [{
- "DataActions": [
- "Microsoft.DocumentDB/databaseAccounts/readMetadata",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
- ]
- }]
- }'
- ```
-
-1. Now list the role definition you created to fetch its ID:
-
- ```azurecli-interactive
- az cosmosdb sql role definition list --account-name $accountName --resource-group $resourceGroupName
- ```
-
-1. This command returns a response similar to the following example. Record the `id` value.
-
- ```json
- [
- {
- "assignableScopes": [
- "/subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.DocumentDB/databaseAccounts/<myCosmosAccount>"
- ],
- "id": "/subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.DocumentDB/databaseAccounts/<myCosmosAccount>/sqlRoleDefinitions/<roleDefinitionId>",
- "name": "<roleDefinitionId>",
- "permissions": [
- {
- "dataActions": [
- "Microsoft.DocumentDB/databaseAccounts/readMetadata",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
- "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
- ],
- "notDataActions": []
- }
- ],
- "resourceGroup": "<myResourceGroup>",
- "roleName": "MyReadWriteRole",
- "sqlRoleDefinitionGetResultsType": "CustomRole",
- "type": "Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions"
- }
- ]
- ```
-
-1. Now go to Azure portal > Microsoft Entra ID > **Enterprise Applications** and search for the application you created earlier. Record the Object ID found here.
-
- > [!NOTE]
- > Make sure to use its Object ID as found in the **Enterprise applications** section of the Microsoft Entra admin center blade (and not the App registrations section you used earlier).
-
-1. Now create a role assignment. Replace `<aadPrincipalId>` with the Object ID you recorded above (note that this is NOT the same as the Object ID visible in the app registrations view you saw earlier). Also replace `<myResourceGroup>` and `<myCosmosAccount>` accordingly. Replace `<roleDefinitionId>` with the `id` value returned by the `az cosmosdb sql role definition list` command you ran above. Then run the following command in Azure CLI:
-
- ```azurecli-interactive
- resourceGroupName='<myResourceGroup>'
- accountName='<myCosmosAccount>'
- readOnlyRoleDefinitionId='<roleDefinitionId>' # as fetched above
- # For Service Principals make sure to use the Object ID as found in the Enterprise applications section of the Azure Active Directory portal blade.
- principalId='<aadPrincipalId>'
- az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId
- ```
-
-1. Now that you have created a Microsoft Entra application and service principal, created a custom role, and assigned that role permissions to your Cosmos DB account, you should be able to run a notebook. Create a notebook as below and replace the configurations with the appropriate values that you recorded earlier in step 3:
--
- #### [Python](#tab/python)
-
- ```python
- cosmosDatabaseName = "AADsampleDB"
- cosmosContainerName = "sampleContainer"
- authType = "ServicePrinciple"
- cosmosEndpoint = "<replace with URI of your Cosmos DB account>"
- subscriptionId = "<replace with subscriptionId>"
- tenantId = "<replace with Directory (tenant) ID from the portal>"
- resourceGroupName = "<replace with the resourceGroup name>"
- clientId = "<replace with Application (client) ID from the portal>"
- clientSecret = "<replace with application secret value you created earlier>"
-
- cfg = {
- "spark.cosmos.accountEndpoint" : cosmosEndpoint,
- "spark.cosmos.auth.type" : authType,
- "spark.cosmos.account.subscriptionId" : subscriptionId,
- "spark.cosmos.account.tenantId" : tenantId,
- "spark.cosmos.account.resourceGroupName" : resourceGroupName,
- "spark.cosmos.auth.aad.clientId" : clientId,
- "spark.cosmos.auth.aad.clientSecret" : clientSecret,
- "spark.cosmos.database" : cosmosDatabaseName,
- "spark.cosmos.container" : cosmosContainerName
- }
-
- # Configure Catalog Api to be used
- spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.type", authType)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.account.subscriptionId", subscriptionId)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.account.tenantId", tenantId)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.account.resourceGroupName", resourceGroupName)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.aad.clientId", clientId)
- spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.aad.clientSecret", clientSecret)
-
- # create an Azure Cosmos DB database using catalog api
- spark.sql("CREATE DATABASE IF NOT EXISTS cosmosCatalog.{};".format(cosmosDatabaseName))
-
- # create an Azure Cosmos DB container using catalog api
- spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosContainerName))
-
- spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "Schrodinger cat", 2, False)))\
- .toDF("id","name","age","isAlive") \
- .write\
- .format("cosmos.oltp")\
- .options(**cfg)\
- .mode("APPEND")\
- .save()
-
- ```
-
- #### [Scala](#tab/scala)
-
- ```scala
- val cosmosDatabaseName = "AADsampleDB"
- val cosmosContainerName = "sampleContainer"
- val authType = "ServicePrinciple"
- val cosmosEndpoint = "<replace with URI of your Cosmos DB account>"
- val subscriptionId = "<replace with subscriptionId>"
- val tenantId = "<replace with Directory (tenant) ID from the portal>"
- val resourceGroupName = "<replace with the resourceGroup name>"
- val clientId = "<replace with Application (client) ID from the portal>"
- val clientSecret = "<replace with application secret value you created earlier>"
-
- val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
- "spark.cosmos.auth.type" -> authType,
- "spark.cosmos.account.subscriptionId" -> subscriptionId,
- "spark.cosmos.account.tenantId" -> tenantId,
- "spark.cosmos.account.resourceGroupName" -> resourceGroupName,
- "spark.cosmos.auth.aad.clientId" -> clientId,
- "spark.cosmos.auth.aad.clientSecret" -> clientSecret,
- "spark.cosmos.database" -> cosmosDatabaseName,
- "spark.cosmos.container" -> cosmosContainerName
- )
-
- // Configure Catalog Api to be used
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.type", authType)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.account.subscriptionId", subscriptionId)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.account.tenantId", tenantId)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.account.resourceGroupName", resourceGroupName)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.aad.clientId", clientId)
- spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.auth.aad.clientSecret", clientSecret)
-
- // create an Azure Cosmos DB database using catalog api
- spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName};")
-
- // create an Azure Cosmos DB container using catalog api
- spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')")
-
- spark.createDataFrame(Seq(("cat-alive", "Schrodinger cat", 2, true), ("cat-dead", "Schrodinger cat", 2, false)))
- .toDF("id","name","age","isAlive")
- .write
- .format("cosmos.oltp")
- .options(cfg)
- .mode("APPEND")
- .save()
- ```
-
-
- > [!TIP]
- > In this quickstart example credentials are assigned to variables in clear-text, but for security we recommend the usage of secrets. Review instructions on how to secure credentials in Azure Synapse Apache Spark with [linked services using the TokenLibrary](../../synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md). Or if using Databricks, review how to create an [Azure Key Vault backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope) or a [Databricks backed secret scope](/azure/databricks/security/secrets/secret-scopes#create-a-databricks-backed-secret-scope). For configuring secrets, review how to [add secrets to your Spark configuration](/azure/databricks/security/secrets/secrets#read-a-secret).
-
-## Migrate to Spark 3 Connector
-
-If you are using our older Spark 2.4 Connector, you can find out how to migrate to the Spark 3 Connector [here](https://github.com/Azure/azure-sdk-for-jav).
-
-## Next steps
-
-* Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL: [Release notes and resources](sdk-java-spark-v3.md)
-* Learn more about [Apache Spark](https://spark.apache.org/).
-* Learn how to configure [throughput control](throughput-control-spark.md).
-* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
The [Spark Connector](quickstart-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control. > [!TIP]
-> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](./sdk-java-v4.md). In the SDK, you can also use both global and local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at documentation on how to [use throughput control](quickstart-java.md#use-throughput-control) in the Java SDK.
+> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](./sdk-java-v4.md). In the SDK, you can also use both global and local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at documentation on how to [use throughput control](quickstart-java.md) in the Java SDK.
> [!WARNING] > Please note that throughput control is not yet supported for gateway mode.
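As a rough illustration of what a global throughput control group can look like in the Spark connector's write configuration: the key names below follow the connector's throughput control settings but should be verified against the configuration reference, the group and control-container names are placeholders, and the snippet assumes the account variables and an existing DataFrame `df` from the Spark quickstart shown earlier.

```python
# Illustrative write configuration that routes writes through a global
# throughput control group; target values here are placeholders.
cfg_with_throughput_control = {
    "spark.cosmos.accountEndpoint": cosmosEndpoint,
    "spark.cosmos.accountKey": cosmosMasterKey,
    "spark.cosmos.database": cosmosDatabaseName,
    "spark.cosmos.container": cosmosContainerName,
    # Enable throughput control and cap this group at 95% of the container's RU/s.
    "spark.cosmos.throughputControl.enabled": "true",
    "spark.cosmos.throughputControl.name": "SourceContainerThroughputControl",
    "spark.cosmos.throughputControl.targetThroughputThreshold": "0.95",
    # Global control groups persist their state in a dedicated container.
    "spark.cosmos.throughputControl.globalControl.database": cosmosDatabaseName,
    "spark.cosmos.throughputControl.globalControl.container": "ThroughputControl",
}

# df is an existing DataFrame prepared as in the ingestion example.
df.write.format("cosmos.oltp").options(**cfg_with_throughput_control).mode("APPEND").save()
```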
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md
Azure Cosmos DB databases and containers that are configured with autoscale prov
* **Simple:** Autoscale removes the complexity of managing RU/s with custom scripting or manually scaling capacity.
-* **Scalable:** Databases and containers automatically scale the provisioned throughput as needed. There is no disruption to client connections, applications, or impact to Azure Cosmos DB SLAs.
+* **Scalable:** Databases and containers automatically scale the provisioned throughput as needed. There's no disruption to client connections, applications, or impact to Azure Cosmos DB SLAs.
* **Cost-effective:** Autoscale helps optimize your RU/s usage and cost usage by scaling down when not in use. You only pay for the resources that your workloads need on a per-hour basis. Of all hours in a month, if you set autoscale max RU/s(Tmax) and use the full amount Tmax for 66% of the hours or less, you'll save with autoscale. To learn more, see the [how to choose between standard (manual) and autoscale provisioned throughput](how-to-choose-offer.md) article.
The use cases of autoscale include:
* **New applications:** If you're developing a new application and not sure about the throughput (RU/s) you need, autoscale makes it easy to get started. You can start with the autoscale entry point of 100 - 1000 RU/s, monitor your usage, and determine the right RU/s over time.
-* **Infrequently used applications:** If you have an application that's only used for a few hours several times a day, week, or month ΓÇö such as a low-volume application/web/blog site ΓÇö autoscale adjusts the capacity to handle peak usage and scales down when it's over.
+* **Infrequently used applications:** If you have an application which is only used for a few hours several times a day, week, or month ΓÇö such as a low-volume application/web/blog site ΓÇö autoscale adjusts the capacity to handle peak usage and scales down when it's over.
* **Development and test workloads:** If you or your team use Azure Cosmos DB databases and containers during work hours, but don't need them on nights or weekends, autoscale helps save cost by scaling down to a minimum when not in use.
-* **Scheduled production workloads/queries:** If you have a series of scheduled requests, operations, or queries that you want to run during idle periods, you can do that easily with autoscale. When you need to run the workload, the throughput will automatically scale to what's needed and scale down afterward.
+* **Scheduled production workloads/queries:** If you have a series of scheduled requests, operations, or queries that you want to run during idle periods, you can do that easily with autoscale. When you need to run the workload, the throughput automatically scales to the needed value and scales down afterward.
Building a custom solution to these problems not only requires an enormous amount of time, but also introduces complexity in your application's configuration or code. Autoscale enables the above scenarios out of the box and removes the need for custom or manual scaling of capacity. ## How autoscale provisioned throughput works
-When configuring containers and databases with autoscale, you specify the maximum throughput `Tmax` required. Azure Cosmos DB scales the throughput `T` such `0.1*Tmax <= T <= Tmax`. For example, if you set the maximum throughput to 20,000 RU/s, the throughput will scale between 2000 to 20,000 RU/s. Because scaling is automatic and instantaneous, at any point in time, you can consume up to the provisioned `Tmax` with no delay.
+When configuring containers and databases with autoscale, you specify the maximum throughput `Tmax` required. Azure Cosmos DB scales the throughput `T` such `0.1*Tmax <= T <= Tmax`. For example, if you set the maximum throughput to 20,000 RU/s, the throughput scales between 2000 to 20,000 RU/s. Because scaling is automatic and instantaneous, at any point in time, you can consume up to the provisioned `Tmax` with no delay.
-Each hour, you will be billed for the highest throughput `T` the system scaled to within the hour.
+Each hour, you'll be billed for the highest throughput `T` the system scaled to within the hour.
The entry point for autoscale maximum throughput `Tmax` starts at 1000 RU/s, which scales between 100 - 1000 RU/s. You can set `Tmax` in increments of 1000 RU/s and change the value at any time. ## Enable autoscale on existing resources
-Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale-on-existing-database-or-container), [CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell) to enable autoscale on an existing database or container. You can switch between autoscale and standard (manual) provisioned throughput at any time. See this [documentation](autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-) for more information.
+Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale-on-existing-database-or-container), [CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell) to enable autoscale on an existing database or container. You can switch between autoscale and standard (manual) provisioned throughput at any time. For more information, see this [documentation](autoscale-faq.yml#how-does-the-migration-between-autoscale-and-standard--manual--provisioned-throughput-work-).
## <a id="autoscale-limits"></a> Throughput and storage limits for autoscale For any value of `Tmax`, the database or container can store a total of `0.1 * Tmax GB`. After this amount of storage is reached, the maximum RU/s will be automatically increased based on the new storage value, with no impact to your application.
-For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 5000 GB - e.g. storage is now 6000 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
+For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 5000 GB (for example, storage is now 6000 GB), the new maximum RU/s becomes 60,000 RU/s (scales between 6000 - 60,000 RU/s).
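To make the storage-based scaling arithmetic concrete, here's a small illustrative calculation; the function name and the rounding up to the next 1,000 RU/s are assumptions for this sketch.

```python
import math


def autoscale_range(configured_max_rus: int, storage_gb: float) -> tuple[int, int]:
    """Illustrative effective autoscale range as (min RU/s, max RU/s).

    A resource can store up to 0.1 * Tmax GB; beyond that, the maximum RU/s
    grows to roughly 10 RU/s per GB of storage (rounding is an assumption).
    """
    storage_based_max = math.ceil(storage_gb * 10 / 1000) * 1000
    effective_max = max(configured_max_rus, storage_based_max)
    return effective_max // 10, effective_max


# 50,000 max RU/s with 6,000 GB stored -> scales between 6,000 and 60,000 RU/s.
print(autoscale_range(50_000, 6_000))
```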
-When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-maximum-ru-s-on-a-database-or-container--) for more information.
+When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. For more information, see this [documentation](autoscale-faq.yml#can-i-change-the-maximum-ru-s-on-a-database-or-container--).
## Comparison ΓÇô containers configured with manual vs autoscale throughput For more detail, see this [documentation](how-to-choose-offer.md) on how to choose between standard (manual) and autoscale throughput.
For more detail, see this [documentation](how-to-choose-offer.md) on how to choo
|| Containers with standard (manual) throughput | Containers with autoscale throughput | |||| | **Provisioned throughput (RU/s)** | Manually provisioned. | Automatically and instantaneously scaled based on the workload usage patterns. |
-| **Rate-limiting of requests/operations (429)** | May happen, if consumption exceeds provisioned capacity. | Will not happen if you consume RU/s within the autoscale throughput range that you've set. |
+| **Rate-limiting of requests/operations (429)** | May happen if consumption exceeds provisioned capacity. | Won't happen if you consume RU/s within the autoscale throughput range that you've configured. |
| **Capacity planning** | You have to do capacity planning and provision the exact throughput you need. | The system automatically takes care of capacity planning and capacity management. |
-| **Pricing** | You pay for the manually provisioned RU/s per hour, using the [standard (manual) RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). | You pay per hour for the highest RU/s the system scaled up to within the hour. <br/><br/> For single write region accounts, you pay for the RU/s used on an hourly basis, using the [autoscale RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). <br/><br/>For accounts with multiple write regions, there is no extra charge for autoscale. You pay for the throughput used on hourly basis using the same [multi-region write RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). |
+| **Pricing** | You pay for the manually provisioned RU/s per hour, using the [standard (manual) RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). | You pay per hour for the highest RU/s the system scaled up to within the hour. <br/><br/> For single write region accounts, you pay for the RU/s used on an hourly basis, using the [autoscale RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). <br/><br/>For accounts with multiple write regions, there's no extra charge for autoscale. You pay for the throughput used on hourly basis using the same [multi-region write RU/s per hour rate](https://azure.microsoft.com/pricing/details/cosmos-db/). |
| **Best suited for workload types** | Predictable and stable workloads| Unpredictable and variable workloads | ## Next steps
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a keyspace or table for Azure Cosmos DB - API for Cassandra
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a database or graph for Azure Cosmos DB - API for Gremlin
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a database or collection for Azure Cosmos DB for MongoDB
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a database or container for Azure Cosmos DB for NoSQL
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Last updated 07/30/2020 -+ # Create a table with autoscale for Azure Cosmos DB - API for Table
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Last updated 05/13/2020 -+ # Create a table for Azure Cosmos DB - API for Table
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a table for Azure Cosmos DB - API for Table
cosmos-db Visualize Qlik Sense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/visualize-qlik-sense.md
Before following the instructions in this article, ensure that you have the foll
* Create an Azure Cosmos DB API for NoSQL account by using the steps described in [create an account](create-sql-api-dotnet.md#create-account) section of the quickstart article.
-* [Create a database and a collection](nosql/quickstart-java.md#add-a-container) ΓÇô You can use set the collection throughput value to 1000 RU/s.
+* [Create a database and a collection](nosql/quickstart-java.md) – You can set the collection throughput value to 1000 RU/s.
* Load the sample video game sales data to your Azure Cosmos DB account.
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
In the Azure portal, you can change your default payment method to a new credit
- For a Microsoft Online Service Program (pay-as-you-go) account, you must be an [Account Administrator](add-change-subscription-administrator.md#whoisaa). - For a Microsoft Customer Agreement, you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes. - The supported payment methods for Microsoft Azure are credit cards, debit cards, and wire transfer. To get approved to pay by wire transfer, see [Pay for your Azure subscription wire transfer](pay-by-invoice.md). >[!NOTE]
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 10/17/2023 Last updated : 01/08/2024
This article applies to you if you are:
If you signed up for Azure through a Microsoft representative, then your default payment method is already set to *wire transfer*, so these steps aren't needed. - When you switch to pay by wire transfer, you must pay your bill within 30 days of the invoice date by wire transfer. Users with a Microsoft Customer Agreement must always [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to Azure support to enable pay by wire transfer.
Customers who have a Microsoft Online Services Program (pay-as-you-go) account c
## Request to pay by wire transfer > [!NOTE]
-> Currently only customers in the United States can get automatically approved to change their payment method to wire transfer. Support for other regions is being evaluated. If you are not in the United States, you must [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to change your payment method.
+> Currently only customers in the United States can get automatically approved to change their payment method to wire transfer and use the following procedure. Support for other regions is being evaluated. If you are not in the United States, you must [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to change your payment method.
1. Sign in to the Azure portal.
-1. Navigate to **Subscriptions** and then select the one that you want to set up wire transfer for.
+ - If you have a pay-as-you-go subscription, navigate to **Subscriptions** and then select the one that you want to set up wire transfer for.
+ - If you have a Microsoft Customer Agreement, navigate to **Cost Management + Billing** and then select **Billing profiles**. Select the billing profile that you want to set up wire transfer for.
1. In the left menu, select **Payment methods**. 1. On the Payment methods page, select **Pay by wire transfer**. 1. On the **Pay by wire transfer** page, you see a message stating that you can request to use wire transfer instead of automatic payment using a credit or debit card. Select **Continue** to start the check. 1. Depending on your approval status:
- - If you're automatically approved, the page shows a message stating that you've been approved to pay by wire transfer. Enter your **Company name** and then select **Save**.
+ - If you're automatically approved, the page shows a message stating that you're approved to pay by wire transfer. Enter your **Company name** and then select **Save**.
- If the request couldn't be processed or if you're not approved, you need to follow the steps in the next section [Submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer).
-1. If you've been approved, on the Payment methods page under **Other payment methods**, to the right of **Wire transfer**, select the ellipsis (**...**) symbol and then select **Make default**.
+1. If you're approved, on the Payment methods page under **Other payment methods**, to the right of **Wire transfer**, select the ellipsis (**...**) symbol and then select **Make default**.
You're all set to pay by wire transfer. ## Submit a request to set up pay by wire transfer
If you're not automatically approved, you can submit a request to Azure support
1. You should see the overview page. If you don't see Properties in the left menu, at the top of the page under Scope, select **Go to billing account**. 1. In the left menu, select **Properties**. On the properties page, you should see your billing account ID shown as a GUID ID value. It's your Commerce Account ID.
-If we need to run a credit check because of the amount of credit that you need, you're sent a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request.
+If we need to run a credit check because of the amount of credit that you need, you're sent a credit check application. We might ask you to provide your company's audited financial statements. We might ask for a security deposit or a standby letter of credit to approve your credit check request. We ask for them when:
+
+ - No financial information is provided.
+ - The information isn't strong enough to support the amount of credit limit required.
## Switch to pay by wire transfer after approval
-If you have a Microsoft Online Services Program (pay-as-you-go) account and you've been approved to pay by wire transfer, you can switch your payment method in the Azure portal.
+If you have a Microsoft Online Services Program (pay-as-you-go) account and you're approved to pay by wire transfer, you can switch your payment method in the Azure portal.
With a Microsoft Customer Agreement, you can switch your billing profile to wire transfer.
When your account is approved for wire transfer payment, the instructions for pa
## Frequently asked questions
-*Why have I received a request for a legal document?*
+*Why did I receive a request for a legal document?*
Occasionally Microsoft needs legal documentation if the information you provided is incomplete or not verifiable. Examples might include:
cost-management-billing Resolve Past Due Balance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/resolve-past-due-balance.md
tags: billing
Previously updated : 03/13/2023 Last updated : 01/08/2024
If you have a Microsoft Customer Agreement billing account, see [Pay Microsoft C
You get an email and see an alert in the Azure portal when your payment isn't received or if we can't process your payment. Both inform you that your subscription is past due. The email contains a link that takes you to the Settle balance page. - If your default payment method is credit card, the [Account Administrator](add-change-subscription-administrator.md#whoisaa) can settle the outstanding charges in the Azure portal. If you pay by invoice (wire transfer), send your payment to the location listed at the bottom of your invoice. > [!IMPORTANT]
cost-management-billing Withholding Tax Credit India https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/withholding-tax-credit-india.md
tags: billing
Previously updated : 03/13/2023 Last updated : 01/08/2024
Your WHT request must include the following items:
Submit the WHT request by opening a ticket with Microsoft support. - ## Credit card payment If your payment method is a credit card and you made a full payment to MRS, and paid WHT to the Income Tax Department, you must submit a WHT request to claim the refund of the tax amount.
cost-management-billing Mpa Invoice Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mpa-invoice-terms.md
tags: billing
Previously updated : 03/13/2023 Last updated : 01/08/2024
The **Billing details by product** section lists the total charges for each prod
At the bottom of the invoice, there are instructions for paying your bill. You can pay by wire within 60 days of your invoice date. - ## Publisher information If you have third-party services in your bill, the name and address of each publisher is listed at the bottom of your invoice.
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 07/21/2023 Last updated : 01/08/2024
There are two ways to pay for your bill for Azure. You can pay with the default
If you signed up for Azure through a Microsoft representative, then your default payment method is always set to *wire transfer*. Automatic credit card payment isn't an option if you signed up for Azure through a Microsoft representative. Instead, you can [pay with a credit card for individual invoices](#pay-now-in-the-azure-portal). - If you have a Microsoft Online Services Program account, your default payment method is credit card. Payments are normally automatically deducted from your credit card, but you can also make one-time payments manually by credit card. If you have Azure credits, they automatically apply to your invoice each billing period.
defender-for-cloud Transition To Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/transition-to-defender-vulnerability-management.md
Title: Transition to Microsoft Defender Vulnerability Management description: Learn how to transition to Microsoft Defender Vulnerability Management in Microsoft Defender for Cloud. Previously updated : 11/23/2023 Last updated : 01/08/2024 # Transition to Microsoft Defender Vulnerability Management
If your organization is ready to transition to container vulnerability assessmen
| [Azure registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 | | [Azure running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
-### Disable using the Qualys recommendations
+### Disable using the Qualys recommendations for Azure commercial clouds
-To disable the above Qualys recommendations using the Defender for Cloud UI:
+To disable the above Qualys recommendations for Azure commercial clouds using the Defender for Cloud UI:
1. In the Azure portal, navigate to Defender for Cloud and open the **Recommendations** page.
To disable the above Qualys recommendations using the Defender for Cloud UI:
1. Fill out the remaining details and select **Create**. Wait up to 30 minutes for the exemptions to take effect.
+### Disable using the Qualys recommendations for national clouds
+
+To disable the above Qualys recommendations for national clouds (Azure Government and Azure operated by 21Vianet) using the Defender for Cloud UI:
+
+1. Go to **Environment settings** and select the subscription on which you want to disable the recommendation.
+
+ :::image type="content" source="media/transition-to-defender-vulnerability-management/environment-settings.png" alt-text="Screenshot showing how to select subscription in environment settings." lightbox="media/transition-to-defender-vulnerability-management/environment-settings.png":::
+
+1. In the **Settings** pane, go to **Security policy**, and select the initiative assignment.
+
+ :::image type="content" source="media/transition-to-defender-vulnerability-management/security-policy.png" alt-text="Screenshot of security policy settings." lightbox="media/transition-to-defender-vulnerability-management/security-policy.png":::
+
+1. Search for the Qualys recommendation and select **Manage effect and parameters**.
+
+ :::image type="content" source="media/transition-to-defender-vulnerability-management/qualys-recommendation.png" alt-text="Screenshot of Qualys recommendation." lightbox="media/transition-to-defender-vulnerability-management/qualys-recommendation.png":::
+
+1. Change the effect to **Disabled**.
+
+ :::image type="content" source="media/transition-to-defender-vulnerability-management/select-disabled.png" alt-text="Screenshot of disable button." lightbox="media/transition-to-defender-vulnerability-management/select-disabled.png":::
+ ## Step 3: (optional) Update REST API and Azure Resource Graph queries If you currently access container vulnerability assessment results powered by Qualys programmatically, through the Azure Resource Graph (ARG) REST API, the Subassessment REST API, or ARG queries, you need to update your existing queries to match the new schema and REST API provided by the new container vulnerability assessment powered by Microsoft Defender Vulnerability Management.
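For example, a minimal PowerShell sketch for pulling the new subassessment results through ARG might look like the following. It assumes the `Az.ResourceGraph` module is installed and that results can be filtered on the new assessment key from the table above (`c0b7cfc6-3172-465a-b378-53c7ff2cc0d5`); verify the filter and the projected columns against the new schema before relying on them.

```powershell
# Sketch: query container vulnerability subassessments through Azure Resource Graph.
# Requires the Az.ResourceGraph module (Install-Module Az.ResourceGraph).
# The assessment key is the Defender Vulnerability Management key from the table above;
# the id filter and projected columns are assumptions to adapt to the new schema.
$assessmentKey = "c0b7cfc6-3172-465a-b378-53c7ff2cc0d5"

$query = @"
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains '$assessmentKey'
| project id, name, properties
"@

# Return the first 100 matching subassessments.
Search-AzGraph -Query $query -First 100
```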
dev-box How To Create Dev Boxes Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md
Previously updated : 09/11/2023 Last updated : 01/03/2024
As a dev box developer, you can:
You can create as many dev boxes as you need through the Microsoft Dev Box developer portal. You might create a separate dev box for different scenarios, for example: -- **Dev box per workload**: you could create a dev box for your front-end work and a separate dev box for your back-end work. You could also create multiple dev boxes for your back end.-- **Dev box for bug fixing**: you could use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine.
+- **Dev box per workload**. Create a dev box for your front-end work and a separate dev box for your back-end work. You can also create multiple dev boxes for your back-end system.
+- **Dev box for bug fixing**. Use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine.
-You can create a dev box by using:
+You can create a dev box by using the Microsoft Dev Box developer portal. For more information, see [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md).
-- Developer portal. For more information, see [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md)-- Azure CLI dev center extension. For more information, see [Configure Microsoft Dev Box from the command-line with the Azure CLI](how-to-install-dev-box-cli.md)
+You can also create a dev box through the Azure CLI dev center extension. For more information, see [Configure Microsoft Dev Box from the command-line with the Azure CLI](how-to-install-dev-box-cli.md).
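As a rough illustration (not taken from this article), creating a dev box from a script might look like the following sketch. The command group comes from the Azure CLI `devcenter` extension, but the resource names are placeholders and the parameter names are assumptions; confirm them with `az devcenter dev dev-box create --help` before using.

```powershell
# Sketch only: create a dev box with the Azure CLI devcenter extension.
# Placeholder names and assumed parameter names - verify with '--help' before use.
az extension add --name devcenter

az devcenter dev dev-box create `
    --dev-center-name "<Dev center name>" `
    --project-name "<Project name>" `
    --pool-name "<Pool name>" `
    --name "<Dev box name>"
```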
## Connect to a dev box
-After you create your dev box, you can connect to it in two ways:
+After you create your dev box, you can connect to it through a remote application or via the browser.
-- **Remote desktop client application**: remote desktop provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
+A **Remote Desktop client application** provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
-- **Browser**: use the browser for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box).
+You can use the **browser** for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box).
-## Shutdown, restart or start a dev box
+## Shut down, restart, or start a dev box
-You can perform many actions on a dev box in the Microsoft Dev Box developer portal by using the actions menu on the dev box tile. The available options depend on the state of the dev box and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box.
+You can perform many actions on a dev box in the Microsoft Dev Box developer portal by using the actions menu (**...**) on the dev box tile. The available options depend on the state of the dev box and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box.
-To shut down or restart a dev box.
+To shut down or restart a dev box:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
-1. On the dev box you want to shut down or restart, select actions.
+
+1. On the dev box that you want to shut down or restart, select the actions menu (**...**).
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-shutdown.png" alt-text="Screenshot of developer portal showing the dev box tile for a running dev box with the actions menu highlighted.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-shutdown.png" alt-text="Screenshot of the developer portal showing the actions menu for a running dev box." border="false":::
-1. For a dev box that is running, you can select **Shut down** or **Restart**.
+1. For a running dev box, you can select **Shut down** or **Restart**.
To start a dev box: 1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
-1. On the dev box you want to start, select actions.
+
+1. On the dev box that you want to start, select the actions menu (**...**).
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-start.png" alt-text="Screenshot of developer portal showing the dev box tile for a stopped dev box with the actions menu highlighted.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-start.png" alt-text="Screenshot of the developer portal showing the actions menu for a stopped dev box." border="false":::
-1. For a dev box that is stopped, you can select **Start**.
+1. For a stopped dev box, you can select **Start**.
## Get information about a dev box
You can use the Microsoft Dev Box developer portal to view information about a d
To get more information about your dev box: 1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
-1. On the dev box you want to view, select actions > **More info**.
+
+1. On the dev box that you want to view, select the actions menu (**...**) and then select **More Info**.
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-more-info.png" alt-text="Screenshot of developer portal showing a dev box tile with the actions menu selected, and more info highlighted.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-actions-more-info.png" alt-text="Screenshot of the developer portal showing the actions menu for a dev box and More Info selected." border="false":::
1. In the dev box details pane, you see more information about your dev box, like the following example:
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-details-pane.png" alt-text="Screenshot of the dev box more information pane, showing creation date, dev center, dev box pool, and source image for the dev box.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-details-pane.png" alt-text="Screenshot of the dev box more information pane, showing creation date, dev center, dev box pool, and source image for the dev box." border="false":::
## Delete a dev box When you no longer need a dev box, you can delete it in the developer portal.
-There are many reasons why you might not need a dev box anymore. Maybe you finished testing, or you finished working on a specific project within your product.
+There are many reasons why you might not need a dev box anymore. Maybe you completed your testing, or you finished working on a specific project within your product.
-You can delete dev boxes after you finish your tasks. Say you finished fixing your bug and merged your pull request. Now, you can delete your dev box and create new dev boxes to work on new items.
+You can delete dev boxes after you finish your tasks. Suppose you finish fixing your bug and merge your pull request. Now, you can delete your dev box and create new dev boxes to work on new items.
-> [!NOTE]
-> Ensure that neither you nor your team members need the dev box before deleting. You can't retrieve dev boxes after deletion.
+> [!IMPORTANT]
+> You can't retrieve a dev box after it's deleted. Before you delete a dev box, confirm that neither you nor your team members need the dev box for future tasks.
1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
-1. For the dev box that you want to delete, select actions > **Delete**.
+1. For the dev box that you want to delete, select the actions menu (**...**) and then select **Delete**.
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-delete.png" alt-text="Screenshot of the dev box actions menu with the Delete option.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-delete.png" alt-text="Screenshot of the developer portal showing the actions menu for a dev box and Delete selected." border="false":::
1. To confirm the deletion, select **Delete**.
- :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-confirm-delete.png" alt-text="Screenshot of the confirmation message about deleting a dev box.":::
+ :::image type="content" source="media/how-to-create-dev-boxes-developer-portal/dev-box-confirm-delete.png" alt-text="Screenshot of the confirmation message after you select to delete a dev box." border="false":::
## Related content
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
Previously updated : 04/25/2023 Last updated : 01/02/2024
To reduce the complexity of creating VM images, VM Image Builder:
- Removes the need to use complex tooling, processes, and manual steps to create a VM image. VM Image Builder abstracts out all these details and hides Azure-specific requirements, such as the need to generalize the image (Sysprep). And it gives more advanced users the ability to override such requirements. -- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task.
+- Works with existing image build pipelines for a click-and-go experience. You can call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task.
-- Can fetch customization data from various sources, which removes the need to collect them all from one place.
+- Fetches customization data from various sources, which removes the need to collect them all from one place.
-- Can be integrated with Azure Compute Gallery, which creates an image management system for distributing, replicating, versioning, and scaling images globally. Additionally, you can distribute the same resulting image as a virtual hard disk or as one or more managed images, without having to rebuild them from scratch.
+- Integrates with Azure Compute Gallery, which creates an image management system for distributing, replicating, versioning, and scaling images globally. Additionally, you can distribute the same resulting image as a virtual hard disk or as one or more managed images, without having to rebuild them from scratch.
> [!IMPORTANT] > Microsoft Dev Box supports only images that use the security type [Trusted Launch](/azure/virtual-machines/trusted-launch-portal?tabs=portal%2Cportal2) enabled.
To reduce the complexity of creating VM images, VM Image Builder:
To provision a custom image that you created by using VM Image Builder, you need:
+- Azure PowerShell 6.0 or later. If you don't have Azure PowerShell installed, follow the steps in [Install Azure PowerShell on Windows](/powershell/azure/install-azps-windows). A quick version check appears after this list.
- Owner or Contributor permissions on an Azure subscription or on a specific resource group. - A resource group.-- A dev center with an attached network connection. If you don't have a one, follow the steps in [Connect dev boxes to resources by configuring network connections](how-to-configure-network-connections.md).
+- A dev center with an attached network connection. If you don't have one, follow the steps in [Connect dev boxes to resources by configuring network connections](how-to-configure-network-connections.md).
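Before you start, you can confirm the Azure PowerShell prerequisite and sign in with a quick check like the following sketch. The subscription placeholder is yours to fill in; the minimum version to look for is the one called out in the list above.

```powershell
# Quick prerequisite check: confirm the Az module version and sign in.
# Replace <Subscription ID> with the subscription that contains your dev center resources.
Get-InstalledModule -Name Az | Select-Object -Property Name, Version

Connect-AzAccount
Set-AzContext -Subscription "<Subscription ID>"
```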
## Create a Windows image and distribute it to Azure Compute Gallery
-The first step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this by using the Azure CLI.
+The first step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this task by using the Azure CLI.
1. To use VM Image Builder, you need to register the features.
- Check your provider registrations. Make sure that each one returns `Registered`.
+ Check your provider registrations. Make sure each command returns `Registered` for the specified resource provider.
- ```powershell
- Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages | Format-table -Property ResourceTypes,RegistrationState
- Get-AzResourceProvider -ProviderNamespace Microsoft.Storage | Format-table -Property ResourceTypes,RegistrationState
- Get-AzResourceProvider -ProviderNamespace Microsoft.Compute | Format-table -Property ResourceTypes,RegistrationState
- Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault | Format-table -Property ResourceTypes,RegistrationState
- Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Format-table -Property ResourceTypes,RegistrationState
- ```
+ ```powershell
+ Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Storage | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Compute | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Format-table -Property ResourceTypes,RegistrationState
+ ```
- If the provider registrations don't return `Registered`, register the providers by running the following commands:
+ If the provider registrations don't return `Registered`, register the providers by running the following commands:
- ```powershell
- Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
- Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
- Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
- Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
- Register-AzResourceProvider -ProviderNamespace Microsoft.Network
- ```
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+ Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+ ```
-2. Install PowerShell modules:
+1. Install PowerShell modules:
- ```powershell
- 'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
- ```
+ ```powershell
+ 'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
+ ```
-3. Create variables to store information that you use more than once.
+1. Create variables to store information that you use more than once.
- Copy the following sample code. Replace `<Resource group>` with the resource group that you used to create the dev center.
+ 1. Copy the following sample code.
+ 1. Replace `<Resource group>` with the resource group that you used to create the dev center.
+ 1. Run the updated code in PowerShell.
- ```powershell
- # Get existing context
- $currentAzContext = Get-AzContext
- # Get your current subscription ID
- $subscriptionID=$currentAzContext.Subscription.Id
- # Destination image resource group
- $imageResourceGroup="<Resource group>"
- # Location
- $location="eastus2"
- # Image distribution metadata reference name
- $runOutputName="aibCustWinManImg01"
- # Image template name
- $imageTemplateName="vscodeWinTemplate"
- ```
+ ```powershell
+ # Get existing context
+ $currentAzContext = Get-AzContext
-4. Create a user-assigned identity and set permissions on the resource group.
+ # Get your current subscription ID
+ $subscriptionID=$currentAzContext.Subscription.Id
- VM Image Builder uses the provided user identity to inject the image into Azure Compute Gallery. The following example creates an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
+ # Destination image resource group
+ $imageResourceGroup="<Resource group>"
- ```powershell
- # Set up role def names, which need to be unique
- $timeInt=$(get-date -UFormat "%s")
- $imageRoleDefName="Azure Image Builder Image Def"+$timeInt
- $identityName="aibIdentity"+$timeInt
-
- ## Add an Azure PowerShell module to support AzUserAssignedIdentity
- Install-Module -Name Az.ManagedServiceIdentity
-
- # Create an identity
- New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName -Location $location
-
- $identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
- $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
- ```
+ # Location
+ $location="eastus2"
-5. Assign permissions for the identity to distribute the images.
+ # Image distribution metadata reference name
+ $runOutputName="aibCustWinManImg01"
- Use this command to download an Azure role definition template, and then update it with the previously specified parameters:
+ # Image template name
+ $imageTemplateName="vscodeWinTemplate"
+ ```
- ```powershell
- $aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
- $aibRoleImageCreationPath = "aibRoleImageCreation.json"
+1. Create a user-assigned identity and set permissions on the resource group by running the following code in PowerShell.
+
+ VM Image Builder uses the provided user identity to inject the image into Azure Compute Gallery. The following example creates an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
+
+ ```powershell
+ # Set up role definition names, which need to be unique
+ $timeInt=$(get-date -UFormat "%s")
+ $imageRoleDefName="Azure Image Builder Image Def"+$timeInt
+ $identityName="aibIdentity"+$timeInt
- # Download the configuration
- Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
- ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
- ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
- ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+ # Add an Azure PowerShell module to support AzUserAssignedIdentity
+ Install-Module -Name Az.ManagedServiceIdentity
- # Create a role definition
- New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
- # Grant the role definition to the VM Image Builder service principal
- New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
- ```
+ # Create an identity
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName -Location $location
+
+ $identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
+ $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
+ ```
+
+1. Assign permissions for the identity to distribute the images.
+
+ Use this command to download an Azure role definition template, and then update it with the previously specified parameters:
+
+ ```powershell
+ $aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
+ $aibRoleImageCreationPath = "aibRoleImageCreation.json"
+
+ # Download the configuration
+ Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+
+ # Create a role definition
+ New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
+
+ # Grant the role definition to the VM Image Builder service principal
+ New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ ```
## Create a gallery
-To use VM Image Builder with Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you. The following code creates a definition that has trusted launch as the security type and meets the Windows 365 image requirements.
+To use VM Image Builder with Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you.
-```powershell
-# Gallery name
-$galleryName= "devboxGallery"
+1. Run the following commands to create a new gallery and image definition.
-# Image definition name
-$imageDefName ="vscodeImageDef"
+ This code creates a definition with the _trusted launch_ security type and meets the Windows 365 image requirements.
-# Additional replication region
-$replRegion2="eastus"
+ ```powershell
+ # Gallery name
+ $galleryName= "devboxGallery"
-# Create the gallery
-New-AzGallery -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location
+ # Image definition name
+ $imageDefName ="vscodeImageDef"
-$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
-$features = @($SecurityType)
+ # Additional replication region
+ $replRegion2="eastus"
-# Create the image definition
-New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2"
-```
+ # Create the gallery
+ New-AzGallery -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location
-1. Copy the following Azure Resource Manger template for VM Image Builder. This template indicates the source image and the customizations applied. This template installs Choco and VS Code. It also indicates where the image is distributed.
+ $SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
+ $features = @($SecurityType)
- ```json
- {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "imageTemplateName": {
- "type": "string"
- },
- "api-version": {
- "type": "string"
- },
- "svclocation": {
- "type": "string"
- }
- },
- "variables": {},
- "resources": [
- {
- "name": "[parameters('imageTemplateName')]",
- "type": "Microsoft.VirtualMachineImages/imageTemplates",
- "apiVersion": "[parameters('api-version')]",
- "location": "[parameters('svclocation')]",
- "dependsOn": [],
- "tags": {
- "imagebuilderTemplate": "win11multi",
- "userIdentity": "enabled"
- },
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<imgBuilderId>": {}
- }
- },
- "properties": {
- "buildTimeoutInMinutes": 100,
- "vmProfile": {
- "vmSize": "Standard_DS2_v2",
- "osDiskSizeGB": 127
- },
- "source": {
- "type": "PlatformImage",
- "publisher": "MicrosoftWindowsDesktop",
- "offer": "Windows-11",
- "sku": "win11-21h2-avd",
- "version": "latest"
- },
- "customize": [
- {
- "type": "PowerShell",
- "name": "Install Choco and Vscode",
- "inline": [
- "Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))",
- "choco install -y vscode"
- ]
- }
- ],
- "distribute":
- [
- {
- "type": "SharedImage",
- "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>",
- "runOutputName": "<runOutputName>",
- "artifactTags": {
- "source": "azureVmImageBuilder",
- "baseosimg": "win11multi"
- },
- "replicationRegions": [
- "<region1>",
- "<region2>"
- ]
- }
- ]
- }
- }
- ]
- }
- ```
-
-2. Configure the template with your variables:
+ # Create the image definition
+ New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2"
+ ```
- ```powershell
- $templateFilePath = <Template Path>
-
- (Get-Content -path $templateFilePath -Raw ) -replace '<subscriptionID>',$subscriptionID | Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<rgName>',$imageResourceGroup | Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<runOutputName>',$runOutputName | Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<imageDefName>',$imageDefName | Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<sharedImageGalName>',$galleryName| Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<region1>',$location | Set-Content -Path $templateFilePath
- (Get-Content -path $templateFilePath -Raw ) -replace '<region2>',$replRegion2 | Set-Content -Path $templateFilePath
- ((Get-Content -path $templateFilePath -Raw) -replace '<imgBuilderId>',$identityNameResourceId) | Set-Content -Path $templateFilePath
- ```
+1. Create a file to store your template definition, such as c:/temp/mytemplate.txt.
-3. Create the image version.
+1. Copy the following Azure Resource Manager template for VM Image Builder into your new template file.
- Your template must be submitted to the service. The following commands download any dependent artifacts, such as scripts, and store them in the staging resource group. The staging resource group is prefixed with `IT_`.
+ This template indicates the source image and the customizations applied. It installs Choco and VS Code, and also indicates the image distribution location.
- ```powershell
- New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -Api-Version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location
- ```
-
- To build the image, invoke `Run` on the template:
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "imageTemplateName": {
+ "type": "string"
+ },
+ "api-version": {
+ "type": "string"
+ },
+ "svclocation": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('imageTemplateName')]",
+ "type": "Microsoft.VirtualMachineImages/imageTemplates",
+ "apiVersion": "[parameters('api-version')]",
+ "location": "[parameters('svclocation')]",
+ "dependsOn": [],
+ "tags": {
+ "imagebuilderTemplate": "win11multi",
+ "userIdentity": "enabled"
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<imgBuilderId>": {}
+ }
+ },
+ "properties": {
+ "buildTimeoutInMinutes": 100,
+ "vmProfile": {
+ "vmSize": "Standard_DS2_v2",
+ "osDiskSizeGB": 127
+ },
+ "source": {
+ "type": "PlatformImage",
+ "publisher": "MicrosoftWindowsDesktop",
+ "offer": "Windows-11",
+ "sku": "win11-21h2-avd",
+ "version": "latest"
+ },
+ "customize": [
+ {
+ "type": "PowerShell",
+ "name": "Install Choco and Vscode",
+ "inline": [
+ "Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))",
+ "choco install -y vscode"
+ ]
+ }
+ ],
+ "distribute":
+ [
+ {
+ "type": "SharedImage",
+ "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>",
+ "runOutputName": "<runOutputName>",
+ "artifactTags": {
+ "source": "azureVmImageBuilder",
+ "baseosimg": "win11multi"
+ },
+ "replicationRegions": [
+ "<region1>",
+ "<region2>"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+ Close your template file before proceeding to the next step.
+
+1. Configure your new template with your variables.
+
+ Replace `<Template Path>` with the location of your template file, such as `c:/temp/mytemplate.txt`.
+
+ ```powershell
+ $templateFilePath = "<Template Path>"
+
+ (Get-Content -path $templateFilePath -Raw ) -replace '<subscriptionID>',$subscriptionID | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<rgName>',$imageResourceGroup | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<runOutputName>',$runOutputName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<imageDefName>',$imageDefName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<sharedImageGalName>',$galleryName| Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region1>',$location | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region2>',$replRegion2 | Set-Content -Path $templateFilePath
+ ((Get-Content -path $templateFilePath -Raw) -replace '<imgBuilderId>',$identityNameResourceId) | Set-Content -Path $templateFilePath
+ ```
+
+1. Submit your template to the service.
+
+ The following command downloads any dependent artifacts, such as scripts, and stores them in the staging resource group. The staging resource group is prefixed with `IT_`.
+
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -Api-Version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location
+ ```
+
+1. Build the image by invoking the `Run` command on the template:
+
+ At the prompt to confirm the run process, enter **Yes**.
+
+ ```powershell
+ Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Run
+ ```
+
+ > [!IMPORTANT]
+ > Creating the image and replicating it to both regions can take some time. You might see a difference in progress reporting between PowerShell and the Azure portal. Before you begin creating a dev box definition, wait until the process completes.
+
+1. Get information about the newly built image, including the run status and provisioning state.
```powershell
- Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Run
-
+ Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup | Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
```
- Creating the image and replicating it to both regions can take a few moments. Before you begin creating a dev box definition, wait until this part is finished.
+ Sample output:
```powershell
- Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup | Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
+ Name              LastRunStatusRunState LastRunStatusMessage ProvisioningState
+ ----              --------------------- -------------------- -----------------
+ vscodeWinTemplate                                            Creating
```
-Alternatively, you can view the provisioning state of your image in the Azure portal by going to your gallery and then the image definition.
+ You can also view the provisioning state of your image in the Azure portal. Go to your gallery and view the image definition. To wait for the build to finish from PowerShell instead, see the polling sketch after this procedure.
+ :::image type="content" source="media/how-to-customize-devbox-azure-image-builder/image-version-provisioning-state.png" alt-text="Screenshot that shows the provisioning state of the customized image version." lightbox="media/how-to-customize-devbox-azure-image-builder/image-version-provisioning-state.png":::
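If you prefer to wait in PowerShell instead of refreshing the portal, a minimal polling sketch like the following reuses the same `Get-AzImageBuilderTemplate` call and the variables from the earlier steps until the run finishes. The polling interval and the terminal states checked here are assumptions; adjust them for your environment.

```powershell
# Sketch: poll the image template until the build run reaches a terminal state.
# The 10-minute interval and the state names are assumptions - adjust as needed.
do {
    Start-Sleep -Seconds 600

    $template = Get-AzImageBuilderTemplate `
        -ImageTemplateName $imageTemplateName `
        -ResourceGroupName $imageResourceGroup

    Write-Output "$(Get-Date -Format u) run state: $($template.LastRunStatusRunState)"
} while ($template.LastRunStatusRunState -notin @('Succeeded', 'Failed', 'Canceled'))
```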
## Configure the gallery
-After your custom image has been provisioned in the gallery, you can configure the gallery to use the images in the dev center. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).
+After your custom image is provisioned in the gallery, you can configure the gallery to use the images in the dev center. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).
## Set up Microsoft Dev Box with a custom image
After the gallery images are available in the dev center, you can use the custom
## Related content -- [2. Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
Previously updated : 04/25/2023 Last updated : 01/05/2024 #Customer intent: As a platform engineer, I want to be able to manage dev box definitions so that I can provide appropriate dev boxes to my users.
Depending on their task, development teams have different software, configuratio
To manage a dev box definition, you need the following permissions:
-|Action|Permissions required|
-|--|--|
-|Create, delete, or update a dev box definition|Owner, Contributor, or Write permissions on the dev center in which you want to create the dev box definition. |
+| Action | Permissions required |
+|||
+| _Create, delete, or update a dev box definition_ | Owner, Contributor, or Write permissions on the dev center in which you want to create the dev box definition. |
## Sources of images When you create a dev box definition, you need to select a virtual machine image. Microsoft Dev Box supports the following types of images: -- Preconfigured images from the Azure Marketplace
+- Preconfigured images from Azure Marketplace
- Custom images stored in an Azure compute gallery ### Azure Marketplace
To use the custom image while creating a dev box definition, attach the compute
When you select an image to use in your dev box definition, you must specify which version of the image you want to use: - **Numbered image versions**: If you want a consistent dev box definition in which the base image doesn't change, use a specific, numbered version of the image. Using a numbered version ensures that all the dev boxes in the pool always use the same version of the image.-- **Latest image versions**: If you want a flexible dev box definition in which you can update the base image as needs change, use the latest version of the image. This choice ensures that new dev boxes use the most recent version of the image. Existing dev boxes aren't modified when an image version is updated.
+- **Latest image versions**: If you want a flexible dev box definition in which you can update the base image as requirements change, use the latest version of the image. This choice ensures that new dev boxes use the most recent version of the image. Existing dev boxes aren't modified when an image version is updated. To check which numbered versions are available in your gallery, see the sketch after this list.
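To see which numbered versions already exist for an image definition before you pin one, a quick listing like the following sketch can help. The gallery, resource group, and image definition names are placeholders; replace them with your own values.

```powershell
# List the numbered versions that exist for an image definition in a compute gallery.
# The names below are placeholders; replace them with your own values.
Get-AzGalleryImageVersion `
    -ResourceGroupName "<Resource group>" `
    -GalleryName "<Gallery name>" `
    -GalleryImageDefinitionName "<Image definition name>" |
    Select-Object -Property Name, ProvisioningState
```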
## Create a dev box definition
The following steps show you how to create a dev box definition by using an exis
1. In the search box, enter **dev center**. In the list of results, select **Dev centers**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot that shows a search for dev centers from the Azure portal search box.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot that shows a search for dev centers from the Azure portal search box." lightbox="./media/how-to-manage-dev-box-definitions/discover-devcenter.png":::
1. Open the dev center in which you want to create the dev box definition, and then select **Dev box definitions**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot that shows the dev center overview page and the menu item for dev box definitions.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot that shows the dev center overview page and the menu item for dev box definitions." lightbox="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png":::
1. On the **Dev box definitions** page, select **Create**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/create-dev-box-definition.png" alt-text="Screenshot of the Create button and the list of existing dev box definitions.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/create-dev-box-definition.png" alt-text="Screenshot of the Create button and the list of existing dev box definitions." lightbox="./media/how-to-manage-dev-box-definitions/create-dev-box-definition.png":::
1. On the **Create dev box definition** pane, enter the following values:
- |Name|Value|
- |-|-|
- |**Name**|Enter a descriptive name for your dev box definition. You can't change the dev box definition name after it's created. |
- |**Image**|Select the base operating system for the dev box. You can select an image from Azure Marketplace or from Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** or **Visual Studio 2022 Pro on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image.|
- |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|
- |**Compute**|Select the compute combination for your dev box definition.|
- |**Storage**|Select the amount of storage for your dev box definition.|
+ | Setting | Value | Note |
+ ||||
+ | **Name** | Enter a descriptive name for your dev box definition. | You can't change the dev box definition name after it's created. |
+ | **Image** | Select the base operating system for the dev box. You can select an image from Azure Marketplace or from Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image or **Visual Studio 2022 Pro on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. | To access custom images when you create a dev box definition, you can use Azure Compute Gallery. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md). |
+ | **Image version** | Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available. | Selecting the **Latest** image version enables the dev box pool to use the most recent version of your chosen image from the gallery. This approach ensures the created dev boxes stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated. |
+ | **Compute** | Select the compute combination for your dev box definition. | |
+ | **Storage** | Select the amount of storage for your dev box definition. | |
+ | **Enable hibernation**| Leave this checkbox unselected. | |
:::image type="content" source="./media/how-to-manage-dev-box-definitions/recommended-test-image.png" alt-text="Screenshot that shows the pane for creating a dev box definition."::: 1. Select **Create**. > [!NOTE]
-> Dev box definitions with 4 core SKUs are no longer supported. You will need to update to an 8 core SKU or delete the dev box definition.
+> Dev box definitions with 4 core SKUs are no longer supported. You need to update to an 8 core SKU or delete the dev box definition.
## Update a dev box definition
-Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes use the new configuration.
+Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so new dev boxes use the new configuration.
You can update the image, image version, compute, and storage settings for a dev box definition:
You can update the image, image version, compute, and storage settings for a dev
1. In the search box, enter **dev center**. In the list of results, select **Dev centers**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot that shows a search for dev centers in the Azure portal search box.":::
- 1. Open the dev center that contains the dev box definition that you want to update, and then select **Dev box definitions**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot that shows the dev center overview page and the menu option for dev box definitions.":::
-
-1. Select the dev box definitions that you want to update, and then select the edit button.
+1. Select the dev box definition that you want to update, and then select the edit (**pencil**) button.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/update-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions and the edit button.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/update-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions and the edit button." lightbox="./media/how-to-manage-dev-box-definitions/update-dev-box-definition.png":::
1. On the page for editing a dev box definition, you can select a new image, change the image version, change the compute, or modify the available storage.
To delete a dev box definition in the Azure portal:
1. In the search box, enter **dev center**. In the list of results, select **Dev centers**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot of a search for dev centers from the Azure portal search box.":::
- 1. Open the dev center from which you want to delete the dev box definition, and then select **Dev box definitions**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot of the dev center overview page and the menu item for dev box definitions.":::
-
1. Select the dev box definition that you want to delete, and then select **Delete**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-dev-box-definition.png" alt-text="Screenshot of a selected dev box definition and the Delete button.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-dev-box-definition.png" alt-text="Screenshot of a selected dev box definition and the Delete button." lightbox="./media/how-to-manage-dev-box-definitions/delete-dev-box-definition.png":::
1. In the warning message, select **OK**.
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-warning.png" alt-text="Screenshot of the warning message about deleting a dev box definition.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-warning.png" alt-text="Screenshot of the warning message about deleting a dev box definition.":::
## Related content
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
Previously updated : 04/25/2023 Last updated : 01/05/2024 #Customer intent: As a platform engineer, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users.
In this article, you learn how to manage a dev box pool in Microsoft Dev Box by using the Azure portal.
-A dev box pool is the collection of dev boxes that have the same settings, such as the dev box definition and network connection. A dev box pool is associated with a Microsoft Dev Box project.
+A dev box pool is a collection of dev boxes that have the same settings, such as the dev box definition and network connection. A dev box pool is associated with a Microsoft Dev Box project.
Dev box pools define the location of the dev boxes through the network connection. You can choose to deploy dev boxes to a Microsoft-hosted network or to a network that you manage. If you choose to deploy dev boxes to a network that you manage, you must first [configure a network connection](./how-to-configure-network-connections.md). Organizations that support developers in multiple geographical locations can create dev box pools for each location by specifying a nearby region.
-Developers that have access to the project in the dev center, can then create a dev box from a dev box pool.
+Developers that have access to the project in the dev center can create a dev box from a dev box pool.
## Permissions To manage a dev box pool, you need the following permissions:
-|Action|Permissions required|
-|--|--|
-|Create, delete, or update a dev box pool|Owner or Contributor permissions on an Azure subscription or a specific resource group. </br> DevCenter Project Admin permissions for the project.|
+| Action | Permissions required |
+|||
+| _Create, delete, or update a dev box pool_ | - Owner or Contributor permissions on an Azure subscription or a specific resource group. </br> - DevCenter Project Admin permissions for the project. |
## Create a dev box pool
If you don't have an available dev center with an existing dev box definition an
1. In the search box, enter **projects**. In the list of results, select **Projects**.
- :::image type="content" source="./media/how-to-manage-dev-box-pools/discover-projects.png" alt-text="Screenshot that shows a search for projects from the Azure portal search box.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/discover-projects.png" alt-text="Screenshot that shows a search for projects from the Azure portal search box." lightbox="./media/how-to-manage-dev-box-pools/discover-projects.png":::
-1. Open the project with which you want to associate the new dev box pool.
+1. Open the Dev Box project with which you want to associate the new dev box pool.
- :::image type="content" source="./media/how-to-manage-dev-box-pools/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/projects-grid.png" alt-text="Screenshot of the list of existing projects." lightbox="./media/how-to-manage-dev-box-pools/projects-grid.png":::
1. Select **Dev box pools**, and then select **Create**.
- :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-empty.png" alt-text="Screenshot of the empty list of dev box pools within a project, along with the Create button.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-empty.png" alt-text="Screenshot of the empty list of dev box pools within a project, along with the Create button." lightbox="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-empty.png":::
1. On the **Create a dev box pool** pane, enter the following values:
- |Name|Value|
- |-|-|
- |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes. It must be unique within a project.|
- |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes that are created in this pool.|
- |**Network connection**|1. Select **Deploy to a Microsoft hosted network**, or use an existing network connection. </br>2. Select the region where the dev boxes should be deployed. Be sure to select a region that is close to where your developers are physically located to ensure the lowest latency experience with dev box.|
- |**Dev box Creator Privileges**|Select **Local Administrator** or **Standard User**.|
- |**Enable Auto-stop**|**Yes** is the default. Select **No** to disable an auto-stop schedule. You can configure an auto-stop schedule after the pool is created.|
- |**Stop time**| Select a time to shut down all the dev boxes in the pool.|
- |**Time zone**| Select the time zone that the stop time is in.|
- |**Licensing**| Select this checkbox to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
+ | Setting | Value |
+ |||
+ | **Name** |Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes. It must be unique within a project. |
+ | **Dev box definition** | Select an existing dev box definition. The definition determines the base image and size for the dev boxes that are created in this pool. |
+ | **Network connection** | 1. Select **Deploy to a Microsoft hosted network**, or use an existing network connection. </br>2. Select the region where the dev boxes should be deployed. Be sure to select a region that is close to where your developers are physically located to ensure the lowest latency experience with dev box. |
+ | **Dev box Creator Privileges** | Select **Local Administrator** or **Standard User**. |
+ | **Enable Auto-stop** | **Yes** is the default. Select **No** to disable an auto-stop schedule. You can configure an auto-stop schedule after the pool is created. |
+ | **Stop time** | Select a time to shut down all the dev boxes in the pool. |
+ | **Time zone** | Select the time zone that the stop time is in. |
+ | **Licensing** | Select this checkbox to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
- :::image type="content" source="./media/how-to-manage-dev-box-pools/create-pool-details.png" alt-text="Screenshot of the pane for creating a dev box pool.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/create-pool-details.png" alt-text="Screenshot of the pane for creating a dev box pool." lightbox="./media/how-to-manage-dev-box-pools/create-pool-details.png":::
1. Select **Create**.
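The same pool can also be created from the command line with the `devcenter` Azure CLI extension. A minimal sketch, using placeholder names and assuming an existing dev box definition and network connection (parameter names can vary by extension version):

```azurecli
# Placeholder names; mirrors the portal settings described in the table above.
az devcenter admin pool create \
    --name <myDevBoxPool> \
    --project-name <myProject> \
    --resource-group <myResourceGroup> \
    --devbox-definition-name <myDevBoxDefinition> \
    --network-connection-name <myNetworkConnection> \
    --local-administrator Enabled \
    --location <region>
```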
If you don't have an available dev center with an existing dev box definition an
The Azure portal deploys the dev box pool and runs health checks to ensure that the image and network pass the validation criteria for dev boxes. The following screenshot shows four dev box pools, each with a different status. ## Manage dev boxes in a pool
You can manage existing dev boxes in a dev box pool through the Azure portal. Yo
1. Select the pool that contains the dev box that you want to manage.
- :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png" alt-text="Screenshot showing a list of dev box pools in Azure portal." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png":::
+ :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png" alt-text="Screenshot showing a list of dev box pools in Azure portal." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png":::
-1. Scroll to the far right, and select the Dev box operations menu (**...**) for the dev box that you want to manage.
+1. Scroll to the right, and select more actions (**...**) for the dev box that you want to manage.
- :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png" alt-text="Screenshot of the Azure portal, showing dev boxes in a dev box pool." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png":::
+ :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png" alt-text="Screenshot of the Azure portal, showing dev boxes in a dev box pool." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png":::
1. Depending on the current state of the dev box, you can select **Start**, **Stop**, or **Delete**.
- :::image type="content" source="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png" alt-text="Screenshot of the Azure portal, showing the menu for managing a dev box." lightbox="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png":::
+ :::image type="content" source="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png" alt-text="Screenshot of the Azure portal, showing the menu for managing a dev box." lightbox="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png":::
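As a hedged sketch, the equivalent developer (data plane) operations are also available through the `devcenter` Azure CLI extension; the names below are placeholders and the exact parameter names can vary by extension version:

```azurecli
# Placeholder names; stop or delete a dev box on behalf of the signed-in user.
az devcenter dev dev-box stop \
    --dev-center <myDevCenter> \
    --project-name <myProject> \
    --user-id "me" \
    --name <myDevBox>

az devcenter dev dev-box delete \
    --dev-center <myDevCenter> \
    --project-name <myProject> \
    --user-id "me" \
    --name <myDevBox>
```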
## Delete a dev box pool
To delete a dev box pool in the Azure portal:
1. In the search box, enter **projects**. In the list of results, select **Projects**. 1. Open the project from which you want to delete the dev box pool.
-
-1. Select the dev box pool you that you want to delete, and then select **Delete**.
- :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete.png" alt-text="Screenshot of a selected dev box pool in the list of dev box pools, along with the Delete button.":::
+1. Scroll to the right, and select more actions (**...**) for the dev box pool that you want to delete.
+
+1. Select **Delete**.
-1. In the confirmation message, select **Continue**.
+1. In the confirmation message, enter the name of the dev box pool to confirm the deletion, and then select **Delete**.
- :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png" alt-text="Screenshot of the confirmation message for deleting a dev box pool.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png" alt-text="Screenshot of the confirmation message for deleting a dev box pool." lightbox="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png":::
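The deletion can also be scripted with the `devcenter` Azure CLI extension; a minimal sketch with placeholder names:

```azurecli
# Placeholder names; pool deletion also removes the dev boxes in the pool.
az devcenter admin pool delete \
    --name <myDevBoxPool> \
    --project-name <myProject> \
    --resource-group <myResourceGroup>
```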
## Related content - [Provide access to projects for project admins](./how-to-project-admin.md)-- [2. Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md
Previously updated : 04/25/2023 Last updated : 01/05/2024
Use the following steps to assign the DevCenter Project Admin role:
1. Select the project that you want to give your team members access to.
- :::image type="content" source="./media/how-to-project-admin/projects-grid.png" alt-text="Screenshot that shows the list of existing projects.":::
+ :::image type="content" source="./media/how-to-project-admin/projects-grid.png" alt-text="Screenshot that shows the list of existing projects." lightbox="./media/how-to-project-admin/projects-grid.png":::
-1. On the left menu, select **Access Control (IAM)**.
+1. On the left, select **Access Control (IAM)**.
- :::image type="content" source="./media/how-to-project-admin/access-control-tab.png" alt-text="Screenshot that shows the access control page for a project.":::
+ :::image type="content" source="./media/how-to-project-admin/access-control-tab.png" alt-text="Screenshot that shows the access control page for a project." lightbox="./media/how-to-project-admin/access-control-tab.png":::
1. Select **Add** > **Add role assignment**. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). | Setting | Value |
- | | |
+ |||
| **Role** | Select **DevCenter Project Admin**. | | **Assign access to** | Select **User, group, or service principal**. | | **Members** | Select the users or groups that need administrative access to the project. |
- :::image type="content" source="media/how-to-project-admin/add-role-assignment-admin.png" alt-text="Screenshot that shows the pane for adding a role assignment.":::
+ :::image type="content" source="media/how-to-project-admin/add-role-assignment-admin.png" alt-text="Screenshot that shows the pane for adding a role assignment." lightbox="media/how-to-project-admin/add-role-assignment-admin.png":::
The users can now manage the project and create dev box pools within it.
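The same assignment can be scripted with `az role assignment create`; a minimal sketch, assuming a placeholder principal and the project's resource ID as the scope:

```azurecli
# Placeholders throughout; scope the role to the Dev Box project resource.
az role assignment create \
    --role "DevCenter Project Admin" \
    --assignee "<user-or-group-object-id>" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```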
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Azure Database Migration Service prerequisites that are common across all suppor
> Set-AzRoleDefinition -Role $aRole > } >
- > function Update-DmsConributorRole() {
+ > function Update-DmsContributorRole() {
> $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor" > $aRole.Actions = $writerActions > $aRole.NotActions = @()
Azure Database Migration Service prerequisites that are common across all suppor
> New-DmsReaderRole > New-DmsContributorRole > Update-DmsReaderRole
- > Update-DmsConributorRole
+ > Update-DmsContributorRole
> ``` ## Prerequisites for migrating SQL Server to Azure SQL Database
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
description: This article explains how to subscribe to events published by Micro
+ Last updated 12/08/2023
Other useful links:
- [Microsoft Graph API webhooks](/graph/api/resources/webhooks) - [Best practices for working with Microsoft Graph API](/graph/best-practices-concept) - [Microsoft Graph API SDKs](/graph/sdks/sdks-overview)-- [Microsoft Graph API tutorials](/graph/tutorials), which shows how to use Graph API. This article doesn't necessarily include examples for sending events to Event Grid.
+- [Microsoft Graph API tutorials](/graph/tutorials), which shows how to use Graph API. This article doesn't necessarily include examples for sending events to Event Grid.
expressroute Expressroute Howto Reset Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering.md
description: Learn how to enable and disable peerings for an Azure ExpressRoute
+ Last updated 12/28/2023
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-custom-route-alert.md
description: This article shows you how to use Azure Automation and Logic Apps t
+ Last updated 12/28/2023
expressroute How To Routefilter Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-routefilter-powershell.md
description: This article describes how to configure route filters for Microsoft
+ Last updated 12/28/2023
expressroute Use S2s Vpn As Backup For Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/use-s2s-vpn-as-backup-for-expressroute-privatepeering.md
description: This page provides architectural recommendations for backing up Azu
+ Last updated 12/28/2023
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-certificates.md
To configure your key vault:
- The provided CA certificate needs to be trusted by your Azure workload. Ensure they are deployed correctly. - Since Azure Firewall Premium is listed as Key Vault [Trusted Service](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), it allows you to bypass Key Vault internal Firewall and to eliminate any exposure of your Key Vault to the Internet.
+> [!NOTE]
+> Whenever you import a new Firewall CA certificate into Azure Key Vault (either for the first time or when replacing an expired CA certificate), you should *explicitly* update the Azure Firewall Policy TLS setting with the new certificate.
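As a hedged sketch, that explicit update can be done with the Azure CLI, assuming the transport security parameters of `az network firewall policy update` and placeholder names:

```azurecli
# Placeholders throughout; points the policy's TLS inspection CA at the new Key Vault secret.
az network firewall policy update \
    --name <myFirewallPolicy> \
    --resource-group <myResourceGroup> \
    --cert-name <myNewCaCertificate> \
    --key-vault-secret-id "https://<myKeyVault>.vault.azure.net/secrets/<myNewCaCertificate>"
```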
+ You can either create or reuse an existing user-assigned managed identity, which Azure Firewall uses to retrieve certificates from Key Vault on your behalf. For more information, see [What is managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md) > [!NOTE]
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium - the Azure CLI' description: Learn how to create an Azure Front Door Standard/Premium using Azure CLI. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities. -+ Last updated 6/30/2023
To learn more about WAF policy settings for Front Door, see [Policy settings for
Azure-managed rule sets provide an easy way to protect your application against common security threats.
-Run [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) to add managed rules to your WAF Policy. This example adds Microsoft_DefaultRuleSet_1.2 and Microsoft_BotManagerRuleSet_1.0 to your policy.
+Run [az network front-door waf-policy managed-rules add](/cli/azure/network/front-door/waf-policy/managed-rules#az-network-front-door-waf-policy-managed-rules-add) to add managed rules to your WAF Policy. This example adds Microsoft_DefaultRuleSet_2.1 and Microsoft_BotManagerRuleSet_1.0 to your policy.
```azurecli-interactive
az network front-door waf-policy managed-rules add \
--policy-name contosoWAF \ --resource-group myRGFD \ --type Microsoft_DefaultRuleSet \
- --version 1.2
+ --action Block \
+ --version 2.1
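  # The bot protection rule set mentioned above can be added with a second call;
  # a sketch using the same policy and resource group:
  #   az network front-door waf-policy managed-rules add \
  #     --policy-name contosoWAF --resource-group myRGFD \
  #     --type Microsoft_BotManagerRuleSet --version 1.0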
``` ```azurecli-interactive
az group delete --name myRGFD
Advance to the next article to learn how to add a custom domain to your Front Door. > [!div class="nextstepaction"]
-> [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
+> [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
certain criteria can be formed using a **field** expression. The following field
apostrophes. - Where **'\<tagName\>'** is the name of the tag to validate the condition for. - Example: `tags['''My.Apostrophe.Tag''']` where **'My.Apostrophe.Tag'** is the name of the tag.+
+ > [!NOTE]
+ > `tags.<tagName>`, `tags[tagName]`, and `tags[tag.with.dots]` are still acceptable ways of
+ > declaring a tags field. However, the preferred expressions are those listed above.
- property aliases - for a list, see [Aliases](#aliases).
+ > [!NOTE]
+ > In **field** expressions referring to **\[\*\] alias**, each element in the array is evaluated
+ > individually with logical **and** between elements. For more information, see
+ > [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
-> [!NOTE]
-> `tags.<tagName>`, `tags[tagName]`, and `tags[tag.with.dots]` are still acceptable ways of
-> declaring a tags field. However, the preferred expressions are those listed above.
-> [!NOTE]
-> In **field** expressions referring to **\[\*\] alias**, each element in the array is evaluated
-> individually with logical **and** between elements. For more information, see
-> [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+Conditions that use `field` expressions can replace the legacy policy definition syntax `"source": "action"`, which used to work for write operations. For example, this is no longer supported:
+```json
+{
+ "source": "action",
+ "like": "Microsoft.Network/publicIPAddresses/*"
+}
+```
+
+But the desired behavior can be achieved using `field` logic:
+```json
+{
+ "field": "type",
+ "equals": "Microsoft.Network/publicIPAddresses"
+}
+```
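Once the rule uses `field` logic like this, it can be deployed as any other definition. A minimal Azure CLI sketch, assuming a hypothetical rules file that wraps the condition above in a full `if`/`then` policy rule:

```azurecli
# Hypothetical definition name and rules file path.
az policy definition create \
    --name "deny-public-ip" \
    --rules ./publicip-rule.json \
    --mode All
```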
#### Use tags with parameters
hdinsight-aks Sdk Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/sdk-cluster-creation.md
Title: Manage HDInsight on AKS clusters using .NET SDK (Preview) description: Manage HDInsight on AKS clusters using .NET SDK. + Last updated 11/23/2023
hdinsight Apache Hadoop Dotnet Csharp Mapreduce Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-dotnet-csharp-mapreduce-streaming.md
Title: Use C# with MapReduce on Hadoop in HDInsight - Azure
description: Learn how to use C# to create MapReduce solutions with Apache Hadoop in Azure HDInsight. -+ Last updated 09/14/2023
hdinsight Apache Hadoop Use Hive Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-powershell.md
Title: Use Apache Hive with PowerShell in HDInsight - Azure
description: Use PowerShell to run Apache Hive queries in Apache Hadoop in Azure HDInsight -+ Last updated 09/14/2023
hdinsight Apache Hadoop Use Mapreduce Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-powershell.md
Title: Use MapReduce and PowerShell with Apache Hadoop - Azure HDInsight
description: Learn how to use PowerShell to remotely run MapReduce jobs with Apache Hadoop on HDInsight. -+ Last updated 05/26/2023
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
Title: Customize Azure HDInsight clusters by using script actions
description: Add custom components to HDInsight clusters by using script actions. Script actions are Bash scripts that can be used to customize the cluster configuration. Or add additional services and utilities like Hue, Solr, or R. -+ Last updated 07/31/2023
healthcare-apis Deploy Dicom Services In Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake.md
Last updated 11/21/2023 -+ # Deploy the DICOM service with Data Lake Storage (Preview)
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
Last updated 05/10/2023 + content_well_notification: - AI-contribution
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
Previously updated : 01/22/2022 Last updated : 01/05/2024
For device telemetry events, IoT Hub will create the default [message route](iot
Device connected and device disconnected events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications.
-* For devices connecting using Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md) will have connection states sent automatically.
-* For devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [AMQP protocol](iot-hub-amqp-support.md), a cloud-to-device link should be created to reduce any delay in accurate connection states.
-* For devices connecting using the .NET [Azure IoT SDK](iot-hub-devguide-sdks.md) with the [MQTT](../iot/iot-mqtt-connect-to-iot-hub.md) or [AMQP](iot-hub-amqp-support.md) protocol wonΓÇÖt send a device connected event until an initial device-to-cloud or cloud-to-device message is sent/received.
-* Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](../iot/iot-mqtt-connect-to-iot-hub.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+For information about monitoring device status with Event Grid, see [Monitor device connection status](./monitor-device-connection-state.md#event-grid).
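As a hedged sketch, an Event Grid subscription that forwards only these connection state events to a webhook might look like the following; the endpoint URL and resource IDs are placeholders:

```azurecli
# Placeholders throughout; forwards only device connection state events.
az eventgrid event-subscription create \
    --name device-connection-events \
    --source-resource-id "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Devices/IoTHubs/<hubName>" \
    --endpoint "https://<your-webhook-endpoint>/api/updates" \
    --included-event-types Microsoft.Devices.DeviceConnected Microsoft.Devices.DeviceDisconnected
```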
### Device connection state interval
iot-hub Monitor Device Connection State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-device-connection-state.md
Previously updated : 10/18/2022 Last updated : 01/05/2024 # Monitor device connection status
Device connection state events are available for devices connecting using either
Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging topics. Over AMQP these operations equate to attaching or transferring a message on the appropriate link paths.
-IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.
- ### Event Grid limitations Using Event Grid to monitor your device status comes with the following limitations: * Event Grid doesn't report each individual device connect and disconnect event. Instead, it polls for device status every 60 seconds and publishes the most recent connection state if there was a state change. For this reason, state change reports may be delayed up to one minute and individual state changes may be unreported if multiple events happen within the 60-second window.
-* Devices that use MQTT start reporting device status automatically. However, devices that use AMQP need [cloud-to-device link](iot-hub-amqp-support.md#invoke-cloud-to-device-messages-service-client) before they can report device status.
-* The IoT C SDK doesn't have a connect method. Customers must send telemetry to begin reporting accurate device connection states.
+* Devices that use AMQP need a [cloud-to-device link](iot-hub-amqp-support.md#invoke-cloud-to-device-messages-service-client) before they can report device status.
* Event Grid exposes a public endpoint that can't be hidden. If any of these limitations affect your ability to use Event Grid for device status monitoring, then you should consider building a custom device heartbeat pattern instead.
iot-operations Howto Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-secrets.md
# Previously updated : 12/06/2023 Last updated : 12/19/2023 - ignite-2023
Azure IoT Operations supports Azure Key Vault for storing secrets and certificat
For more information, see [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli).
+## Configure service principal and Azure Key Vault upfront
+
+If the Azure account executing the `az iot ops init` command does not have permissions to query the Azure Resource Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli).
+
+### Configure service principal for interacting with Azure Key Vault via Microsoft Entra ID
+
+Follow these steps to create a new app registration that Azure IoT Operations uses to authenticate to Key Vault.
+
+First, register an application with Microsoft Entra ID.
+
+1. In the Azure portal search bar, search for and select **Microsoft Entra ID**.
+
+1. Select **App registrations** from the **Manage** section of the Microsoft Entra ID menu.
+
+1. Select **New registration**.
+
+1. On the **Register an application** page, provide the following information:
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Provide a name for your application. |
+ | **Supported account types** | Ensure that **Accounts in this organizational directory only (<YOUR_TENANT_NAME> only - Single tenant)** is selected. |
+ | **Redirect URI** | Select **Web** as the platform. You can leave the web address empty. |
+
+1. Select **Register**.
+
+ When your application is created, you are directed to its resource page.
+
+1. Copy the **Application (client) ID** from the app registration overview page. You'll use this value as an argument when running Azure IoT Operations deployment.
+
+Next, give your application permissions for key vault.
+
+1. On the resource page for your app, select **API permissions** from the **Manage** section of the app menu.
+
+1. Select **Add a permission**.
+
+1. On the **Request API permissions** page, scroll down and select **Azure Key Vault**.
+
+1. Select **Delegated permissions**.
+
+1. Check the box to select **user_impersonation** permissions.
+
+1. Select **Add permissions**.
+
+Create a client secret that will be added to your Kubernetes cluster to authenticate to your key vault.
+
+1. On the resource page for your app, select **Certificates & secrets** from the **Manage** section of the app menu.
+
+1. Select **New client secret**.
+
+1. Provide an optional description for the secret, then select **Add**.
+
+1. Copy the **Value** and **Secret ID** from your new secret. You'll use these values later.
+
+Finally, retrieve the service principal object ID.
+
+1. On the **Overview** page for your app, under **Essentials**, select the **Application name** link under **Managed application in local directory**. This link opens the enterprise application properties. Copy the **Object ID** to use when you run `az iot ops init`.
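If you prefer to script the registration instead of using the portal, the following is a minimal Azure CLI sketch. The display name is a placeholder, and the Key Vault delegated API permission from the portal steps still needs to be granted separately:

```bash
# Placeholder display name; mirrors what the portal steps above produce.
appId=$(az ad app create --display-name "<my-aio-keyvault-app>" --query appId -o tsv)

# Create the service principal (enterprise application) and capture its object ID.
az ad sp create --id "$appId"
spObjectId=$(az ad sp show --id "$appId" --query id -o tsv)

# Create a client secret to use as the --sp-secret value later.
clientSecret=$(az ad app credential reset --id "$appId" --query password -o tsv)

echo "App ID: $appId  Object ID: $spObjectId"
```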
+
+### Create an Azure Key Vault
+
+Create a new Azure Key Vault and ensure that its **Permission model** is set to **Vault access policy**.
+
+```bash
+az keyvault create --enable-rbac-authorization false --name "<your unique key vault name>" --resource-group "<the name of the resource group>"
+```
+If you have an existing key vault, you can change the permission model by executing the following:
+
+```bash
+az keyvault update --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --enable-rbac-authorization false
+```
+You will need the Key Vault resource ID when you run `az iot ops init`. To retrieve the resource ID, run:
+
+```bash
+az keyvault show --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --query id -o tsv
+```
+
+### Set service principal access policy in Azure Key Vault
+
+The newly created service principal needs a **Secret** `get` and `list` access policy for Azure IoT Operations to work with the secret store.
+
+Run the following command to assign `get` and `list` permissions for secrets and keys to the service principal.
+
+```bash
+az keyvault set-policy --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --object-id <Object ID copied from Enterprise Application SP in Microsoft Entra ID> --secret-permissions get list --key-permissions get list
+```
+
+### Pass service principal and Key Vault arguments to Azure IoT Operations deployment
+
+When following the guide [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli), you need to pass additional flags to the `az iot ops init` command to use the preconfigured service principal and key vault.
+
+The following example shows how to prepare the cluster for Azure IoT Operations without fully deploying it by using the `--no-deploy` flag. You can also run the command without this argument for a default Azure IoT Operations deployment.
+
+```bash
+az iot ops init --name "<your unique key vault name>" --resource-group "<the name of the resource group>" \
+ --kv-id <Key Vault Resource ID> \
+ --sp-app-id <Application registration App ID (client ID) from Microsoft Entra ID> \
+ --sp-object-id <Object ID copied from Enterprise Application in Microsoft Entra ID> \
+ --sp-secret "<Client Secret from App registration in Microsoft Entra ID>" \
+ --no-deploy
+```
+ ## Add a secret to an Azure IoT Operations component Once you have the secret store set up on your cluster, you can create and add Azure Key Vault secrets.
iot-operations Howto Configure Destination Grpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-destination-grpc.md
When you send data to a gRPC endpoint from a destination stage:
## Prerequisites
-To configure and use an aggregate pipeline stage, you need:
+To configure and use a destination pipeline stage, you need:
- A deployed instance of Azure IoT Data Processor (preview). - A [gRPC](https://grpc.io/docs/what-is-grpc/) server that's accessible from the Data Processor instance.
iot-operations Howto Configure Destination Mq Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-destination-mq-broker.md
Use the _MQ_ destination to publish processed messages to an MQTT broker, such a
## Prerequisites
-To configure and use an Azure Data Explorer destination pipeline stage, you need a deployed instance of Azure IoT Data Processor (preview).
+To configure and use a destination pipeline stage, you need a deployed instance of Azure IoT Data Processor (preview).
## Configure the destination stage
load-testing How To Configure Load Test Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-load-test-cicd.md
description: 'This article shows how to run your load tests with Azure Load Test
+ Last updated 06/05/2023
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
You can specify the virtual network configuration settings in the load test crea
1. (Optional) Check **Disable Public IP deployment** if you don't want to deploy a public IP address, load balancer, and network security group in your subnet.
- When you select this option, ensure that there is an alternative mechanism like [Azure NAT Gateway](/azure/nat-gateway/nat-overview#outbound-connectivity), [Azure Firewall](/azure/firewall/tutorial-firewall-deploy-portal), or a [network virtual appliance (NVA)](/azure/virtual-wan/scenario-route-through-nvas-custom) to enable outbound traffic routing from the subnet.
+ When you select this option, ensure that there is an alternative mechanism like [Azure NAT Gateway](/azure/nat-gateway/nat-overview#outbound-connectivity), [Azure Firewall](/azure/firewall/tutorial-firewall-deploy-portal), or a [network virtual appliance (NVA)](/azure/virtual-wan/scenario-route-through-nvas-custom) to enable outbound traffic routing from the subnet.
1. Review or fill the load test information. Follow these steps to [create or manage a test](./how-to-create-manage-test.md).
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## December 20, 2023
+[Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `23.12.18`
+
+Main changes:
+
+- `numpy` version `1.22.3`
+- `pytz` version `2022.6`
+- `torch` version `1.12.0`
+- `certifi` version `2023.7.2`
+- `azure-mgmt-network` to version `25.1.0`
+- `scikit-learn` version `1.0.2`
+- `scipy` version `1.9.2`
+- `accuracy`
+- `pickle5`
+- `pillow` version `10.1.0`
+- `experimental`
+- `ipykernel` version `6.14.0`
+- `en_core_web_sm`
+
+## December 18, 2023
+
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version `23.12.11`
+
+Main changes:
+
+- SDK `1.54.0`
- `numba`
- `scipy`
+- `azure-core` to version `1.29.4`
+- `azure-identity` to version `1.14.0`
+- `azure-storage-queue` to version `12.7.2`
+ ## December 5, 2023 DSVM offering for [Data Science VM ΓÇô Windows 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview) is now generally available in the marketplace.
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
reviewer: msakande Last updated 12/15/2023 -+ # Authenticate clients for online endpoints
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
-- Previously updated : 01/07/2024++ Last updated : 01/08/2024
You can upgrade the extension to the latest version:
### Installation on Linux
-If you're using Linux, the fastest way to install the necessary CLI version and the Machine Learning extension is:
+If you're using Debian or Ubuntu, the fastest way to install the necessary CLI version and the Machine Learning extension is:
:::code language="bash" source="~/azureml-examples-main/cli/misc.sh" id="az_extension_install_linux":::
-For more, see [Install the Azure CLI for Linux](/cli/azure/install-azure-cli-linux).
+For information on how to install on other Linux distributions, visit [Install the Azure CLI for Linux](/cli/azure/install-azure-cli-linux).
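As a rough sketch of that installation on Debian/Ubuntu, assuming the documented convenience script:

```bash
# Install the Azure CLI with the Debian/Ubuntu convenience script, then add the ml extension.
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az extension add -n ml -y
```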
## Set up
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Previously updated : 09/06/2022 Last updated : 01/08/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
For managed online endpoints, Azure Machine Learning reserves 20% of your comput
There are certain VM SKUs that are exempted from extra quota reservation. To view the full list, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
-Azure Machine Learning provides a [shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota) pool from which all users can access quota to perform testing for a limited time. When you use the studio to deploy Llama models (from the model catalog) to a managed online endpoint, Azure Machine Learning allows you to access this shared quota for a short time.
+Azure Machine Learning provides a [shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota) pool from which all users can access quota to perform testing for a limited time. When you use the studio to deploy Llama-2, Phi, Nemotron, Mistral, Dolly and Deci-DeciLM models from the model catalog to a managed online endpoint, Azure Machine Learning allows you to access this shared quota for a short time.
-To deploy a _Llama-2-70b_ or _Llama-2-70b-chat_ model, however, you must have an [Enterprise Agreement subscription](/azure/cost-management-billing/manage/create-enterprise-subscription) before you can deploy using the shared quota. For more information on how to use the shared quota for online endpoint deployment, see [How to deploy foundation models using the studio](how-to-use-foundation-models.md#deploying-using-the-studio).
+For more information on how to use the shared quota for online endpoint deployment, see [How to deploy foundation models using the studio](how-to-use-foundation-models.md#deploying-using-the-studio).
## Prepare your system
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Since the scoring script and environment are automatically included with the fou
##### Shared quota
-If you're deploying a Llama model from the model catalog but don't have enough quota available for the deployment, Azure Machine Learning allows you to use quota from a shared quota pool for a limited time. For _Llama-2-70b_ and _Llama-2-70b-chat_ model deployment, access to the shared quota is available only to customers with [Enterprise Agreement subscriptions](/azure/cost-management-billing/manage/create-enterprise-subscription). For more information on shared quota, see [Azure Machine Learning shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota).
+If you're deploying a Llama-2, Phi, Nemotron, Mistral, Dolly or Deci-DeciLM model from the model catalog but don't have enough quota available for the deployment, Azure Machine Learning allows you to use quota from a shared quota pool for a limited time. For more information on shared quota, see [Azure Machine Learning shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota).
:::image type="content" source="media/how-to-use-foundation-models/deploy-llama-model-with-shared-quota.png" alt-text="Screenshot showing the option to deploy a Llama model temporarily, using shared quota." lightbox="media/how-to-use-foundation-models/deploy-llama-model-with-shared-quota.png":::
managed-grafana How To Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-azure-data-explorer.md
description: In this guide, learn how to connect an Azure Data Explorer datasour
+ zone_pivot_groups: azure-red-hat-openshift-service-principal Last updated 11/29/2023
managed-grafana How To Connect Azure Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-azure-monitor-workspace.md
In this guide, learn how to connect an Azure Monitor workspace to Grafana direct
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - An Azure Managed Grafana instance in the Standard tier. [Create a new instance](quickstart-managed-grafana-portal.md) if you don't have one.-- An [Azure Monitor workspace with Prometheus data](../azure-monitor/containers/prometheus-metrics-enable.md).
+- An [Azure Monitor workspace with Prometheus data](../azure-monitor/containers/monitor-kubernetes.md).
## Add a new role assignment
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
The best way to validate availability of successfully completed backups is to vi
In the Azure portal, under the Monitoring tab - Metrics section, you can find the [Backup Storage Used](./concepts-monitoring.md) metric, which can help you monitor the total backup usage. - **What happens to my backups if I delete my server?** If you delete the server, all backups that belong to the server are also deleted and can't be recovered. To protect server resources post deployment from accidental deletion or unexpected changes, administrators can use [management locks](../../azure-resource-manager/management/lock-resources.md).
+- **What happens to my backups when I restore a server?**
+Restoring a server always creates a new server from the original server's backups. Backups aren't copied over to the newly restored server; they remain with the original server. For the newly created server, the first snapshot backup is scheduled immediately after the server is created, and the service then takes daily automated backups and stores them for the configured retention period.
- **How am I charged and billed for my use of backups?** Azure Database for MySQL flexible server provides up to 100% of your provisioned server storage as backup storage at no added cost. Any more backup storage used is charged in GB per month as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/server/). Backup storage billing is also governed by the backup retention period selected and backup redundancy option chosen, apart from the transactional activity on the server, which impacts the total backup storage used directly. - **How are backups retained for stopped servers?**
operator-nexus Howto Monitor Naks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-naks-cluster.md
Look for a Provisioning State of "Succeeded" for the extension. The "k8s-extensi
#### Customize logs & metrics collection
-Container Insights provides end-users functionality to fine-tune the collection of logs and metrics from Nexus Kubernetes Clusters--[Configure Container insights agent data collection](../azure-monitor/containers/container-insights-agent-config.md).
+Container Insights provides end-users functionality to fine-tune the collection of logs and metrics from Nexus Kubernetes Clusters--[Configure Container insights agent data collection](../azure-monitor/containers/container-insights-data-collection-configmap.md).
## Extra resources
operator-service-manager Publisher Resource Preview Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/publisher-resource-preview-management.md
Immutable artifacts are tested artifacts that can't be modified or overwritten.
### Update Artifact Manifest state
+ Use the following Azure CLI command to change the state of an artifact manifest resource.
- ### HTTP Method: POST URL
-
-```http
-https://management.azure.com/{artifactManifestResourceId}/updateState?api-version=2023-09-01
-```
-
- Where artifactManifestResourceId is the full resource ID of the Artifact Manifest resource
-
- ### Request body
-
-```json
-{
- "artifactManifestState": "Uploaded"
-}
-```
-
-### Submit POST
-
-Submit the POST using `az rest` in the Azure CLI.
- ```azurecli
-az rest --method post --uri {artifactManifestResourceId}/updateState?api-version=2023-09-01 --body "{\"artifactManifestState\": \"Uploaded\"}"
-```
-
- Where *{artifactManifestResourceId}* is the full resource ID of the Artifact Manifest resource
-
- Then issue the get command to check that the artifactManifestState change is complete.
-
-```azurecli
- az rest --method get --uri {artifactManifestResourceId}?api-version=2023-09-01
+ az aosm publisher artifact-manifest update-state \
+ --resource-group <myResourceGroupName> \
+ --publisher-name <myPublisherName> \
+ --artifact-store-name <myArtifactStoreName> \
+ --name <myArtifactManifestName> \
+ --state Uploaded
``` ## Network Function Definition and Network Service Design state machine
az rest --method post --uri {artifactManifestResourceId}/updateState?api-version
- Deprecated state is a terminal state but can be reversed. ## Update Network Function definition version state-
-Use the following API to update the state of a Network Function Definition Version (NFDV).
-
-### HTTP Method: POST URL
-
-```http
-https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.HybridNetwork/publishers/{publisherName}/networkfunctiondefinitiongroups/{networkfunctiondefinitiongroups}/networkfunctiondefinitionversions/{networkfunctiondefinitionversions}/updateState?api-version=2023-09-01
-```
-
-### URI parameters
-
-The following table describes the parameters used with the preceding URL.
-
-|Name |Description |
-|||
-|subscriptionId | The subscription ID.
-|resourceGroupName | The name of the resource group. |
-|publisherName | The name of the publisher. |
-|networkfunctiondefinitiongroups | The name of the network function definition groups.
-|networkfunctiondefinitionversions | The network function definition version. |
-|api-version | The API version to use for this operation. |
--
-### Request body
-
-```json
-{
- "versionState": "Active | Deprecated"
-}
-```
-### Submit post
-
-Submit the POST using `az rest` in the Azure CLI.
-
-```azurecli
- az rest --method post --uri {nfdvresourceId}/updateState?api-version=2023-09-01 --body "{\"versionState\": \"Active\"}"
-```
- Where *{nfdvresourceId}* is the full resource ID of the Network Function Definition Version
-
-Then issue the get command to check that the versionState change is complete.
+Use the following Azure CLI command to change the state of a Network Function Definition Version resource.
```azurecli
- az rest --method get --uri {nfdvresourceId}?api-version=2023-09-01
+ az aosm publisher network-function-definition version update-state \
+ --resource-group <myResourceGroup> \
+ --publisher-name <myPublisherName> \
+ --group-name <myNetworkFunctionDefinitionGroupName> \
+ --version-name <myNetworkFunctionDefinitionVersionName> \
+ --version-state Active | Deprecated
``` ## Update Network Service Design Version (NSDV) version state-
-Use the following API to update the state of a Network Service Design Version (NSDV).
-
-### HTTP Method: POST URL
-
-```http
-https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.HybridNetwork/publishers/{publisherName}/networkservicedesigngroups/{nsdName}/networkservicedesignversions/{nsdVersion}/updateState?api-version=2023-09-01
-```
-
-### URI parameters
-
-The following table describes the parameters used with the preceding URL.
-
-|Name |Description |
-|||
-|subscriptionId | The subscription ID.
-|resourceGroupName | The name of the resource group. |
-|publisherName | The name of the publisher. |
-|nsdName | The name of the network service design.
-|nsdVersion | The network service design version. |
-|api-version | The API version to use for this operation. |
--
-### Request body
-
-```json
-{
- "versionState": "Active | Deprecated"
-}
-```
-### Submit post
-
-Submit the POST using `az rest` in the Azure CLI.
-
-```azurecli
-az rest --method post --uri {nsdvresourceId}/updateState?api-version=2023-09-01 --body "{\"versionState\": \"Active\"}"
-```
-Where *{nsdvresourceId}* is the full resource ID of the Network Service Design
-
-Then issue the get command to check that the versionState change is complete.
+Use the following Azure CLI command to change the state of a Network Service Design Version resource.
```azurecli
- az rest --method get --uri {nsdvresourceId}?api-version=2023-09-01
+ az aosm publisher network-service-design version update-state \
+ --resource-group <myResourceGroup> \
+ --publisher-name <myPublisherName> \
+ --group-name <myNetworkServiceDesignGroupName> \
+ --version-name <myNetworkServiceDesignVersionName> \
+ --version-state Active | Deprecated
```
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Flexible Server
description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server Previously updated : 12/18/2023 Last updated : 1/8/2024
The following extensions are available in Azure Database for PostgreSQL - Flexib
|[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) |Used to parse an address into constituent elements. |N/A |3.1.1 |3.1.1 |3.1.1 |3.0.0 |2.5.1 | |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html)|Address Standardizer US dataset example |N/A |3.1.1 |3.1.1 |3.1.1 |3.0.0 |2.5.1 | |[amcheck](https://www.postgresql.org/docs/13/amcheck.html) |Functions for verifying relation integrity |1.3 |1.2 |1.2 |1.2 |1.2 |1.1 |
+|[azure_ai](./generative-ai-azure-overview.md) |Azure OpenAI and Cognitive Services integration for PostgreSQL |0.1.0 |0.1.0 |0.1.0 |0.1.0 |N/A |N/A |
+|[azure_storage](../../postgresql/flexible-server/concepts-storage-extension.md) |extension to export and import data from Azure Storage |1.3 |1.3 |1.3 |1.3 |1.3 |N/A |
|[bloom](https://www.postgresql.org/docs/13/bloom.html) |Bloom access method - signature file based index |1 |1 |1 |1 |1 |1 | |[btree_gin](https://www.postgresql.org/docs/13/btree-gin.html) |Support for indexing common datatypes in GIN |1.3 |1.3 |1.3 |1.3 |1.3 |1.3 | |[btree_gist](https://www.postgresql.org/docs/13/btree-gist.html) |Support for indexing common datatypes in GiST |1.7 |1.5 |1.5 |1.5 |1.5 |1.5 |
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Because replicas are read-only, they don't directly reduce write-capacity burden
### Considerations
-Read replicas are primarily designed for scenarios where offloading queries is beneficial, and a slight lag is manageable. They are optimized to provide near realtime updates from the primary for most workloads, making them an excellent solution for read-heavy scenarios. However, it's important to note that they are not intended for synchronous replication scenarios requiring up-to-the-minute data accuracy. While the data on the replica eventually becomes consistent with the primary, there may be a delay, which typically ranges from a few seconds to minutes, and in some heavy workload or high-latency scenarios, this could extend to hours. Typically, read replicas in the same region as the primary has less lag than geo-replicas, as the latter often deals with geographical distance-induced latency. For more insights into the performance implications of geo-replication, refer to [Geo-replication](#geo-replication) section. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
+Read replicas are primarily designed for scenarios where offloading queries is beneficial, and a slight lag is manageable. They're optimized to provide near real time updates from the primary for most workloads, making them an excellent solution for read-heavy scenarios. However, they aren't intended for synchronous replication scenarios that require up-to-the-minute data accuracy. While the data on the replica eventually becomes consistent with the primary, there may be a delay, which typically ranges from a few seconds to minutes, and in some heavy workload or high-latency scenarios, this could extend to hours. Typically, read replicas in the same region as the primary have less lag than geo-replicas, because the latter often deal with geographical distance-induced latency. For more insights into the performance implications of geo-replication, refer to the [Geo-replication](#geo-replication) section. Use this feature for workloads that can accommodate this delay.
> [!NOTE] > For most workloads, read replicas offer near-real-time updates from the primary. However, with persistent heavy write-intensive primary workloads, the replication lag could continue to grow and might only be able to catch up with the primary. This might also increase storage usage at the primary as the WAL files are only deleted once received at the replica. If this situation persists, deleting and recreating the read replica after the write-intensive workloads are completed, you can bring the replica back to a good state for lag.
Read replicas are primarily designed for scenarios where offloading queries is b
A read replica can be created in the same region as the primary server and in a different one. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Azure in China](https://learn.microsoft.com/azure/china/overview-operations). The special regions now supported are:
+You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](https://learn.microsoft.com/azure/china/overview-operations). The special regions now supported are:
- **Azure Government regions**: - US Gov Arizona - US Gov Texas - US Gov Virginia -- **Azure in China regions**:
+- **Microsoft Azure operated by 21Vianet regions**:
- China North 3 - China East 3
A primary server for Azure Database for PostgreSQL - Flexible Server can be depl
When you start the create replica workflow, a blank Azure Database for the PostgreSQL server is created. The new server is filled with the data on the primary server. For the creation of replicas in the same region, a snapshot approach is used. Therefore, the time of creation is independent of the size of the data. Geo-replicas are created using the base backup of the primary instance, which is then transmitted over the network; therefore, the creation time might range from minutes to several hours, depending on the primary size.
-In Azure Database for PostgreSQL - Flexible Server, the creation operation of replicas is considered successful only when the entire backup of the primary instance copies to the replica destination and the transaction logs synchronize up to the threshold of a maximum 1-GB lag.
+In Azure Database for PostgreSQL - Flexible Server, the creation operation of replicas is considered successful only when the entire backup of the primary instance copies to the replica destination and the transaction logs synchronize up to the threshold of a maximum 1GB lag.
To achieve a successful create operation, avoid making replicas during times of high transactional load. For example, it's best to avoid creating replicas during migrations from other sources to Azure Database for PostgreSQL - Flexible Server or during excessive bulk load operations. If you're migrating data or loading large amounts of data right now, it's best to finish this task first. After completing it, you can then start setting up the replicas. Once the migration or bulk load operation has finished, check whether the transaction log size has returned to its normal size. Typically, the transaction log size should be close to the value defined in the max_wal_size server parameter for your instance. You can track the transaction log storage footprint using the [Transaction Log Storage Used](concepts-monitoring.md#default-metrics) metric, which provides insights into the amount of storage used by the transaction log. By monitoring this metric, you can ensure that the transaction log size is within the expected range and that the replica creation process might be started.
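When you're ready to create the replica, it's a single CLI call; a minimal sketch with placeholder names:

```azurecli
# Placeholder names; creates a read replica from the specified source server.
az postgres flexible-server replica create \
    --replica-name <myReplicaServer> \
    --source-server <myPrimaryServer> \
    --resource-group <myResourceGroup>
```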
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Whether you're just starting out or looking to refresh your knowledge, this intr
## Overview
-Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. The service generally provides more flexibility and server configuration customizations based on user requirements. The flexible server architecture allows users to collocate the database engine with the client tier for lower latency and choose high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to stop/start your server and a burstable compute tier ideal for workloads that don't need full compute capacity continuously. The service supports the community version of [PostgreSQL 11, 12, 13, and 14](./concepts-supported-versions.md). The service is available in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. The service generally provides more flexibility and server configuration customizations based on user requirements. The flexible server architecture allows users to collocate the database engine with the client tier for lower latency and choose high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to stop/start your server and a burstable compute tier ideal for workloads that don't need full compute capacity continuously. The service supports the community version of [PostgreSQL 11, 12, 13, 14, 15 and 16](./concepts-supported-versions.md). The service is available in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
:::image type="content" source="./media/overview/overview-flexible-server.png" alt-text="Diagram of Flexible Server - Overview." lightbox="./media/overview/overview-flexible-server.png":::
postgresql How To Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-dump-and-restore.md
Title: Dump and restore - Azure Database for PostgreSQL - Single Server
-description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server.
+ Title: Dump and restore - Azure Database for PostgreSQL - Flexible Server
+description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server or Flexible Server.
- Previously updated : 09/22/2020+ Last updated : 01/04/2024 # Migrate your PostgreSQL database by using dump and restore [!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. The method you use to restore the database depends on the format of the dump. If your dump is taken in the plain format (the default, `-Fp`, so no specific option needs to be specified), the only way to restore it is with [psql](https://www.postgresql.org/docs/current/app-psql.html), because the dump is a plain-text file. For the other three dump formats (custom, directory, and tar), use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html).
+
+> [!IMPORTANT]
+> The instructions and commands provided in this article are designed to be executed in bash terminals. This includes environments such as Windows Subsystem for Linux (WSL), Azure Cloud Shell, and other bash-compatible interfaces. Please ensure you are using a bash terminal to follow the steps and execute the commands detailed in this guide. Using a different type of terminal or shell environment may result in differences in command behavior and may not produce the intended outcomes.
++
+In this article, we focus on the plain (default) and directory formats. The directory format is useful as it allows you to use multiple cores for processing, which can significantly enhance efficiency, especially for large databases.
+
+The Azure portal streamlines this process via the Connect blade by offering preconfigured commands that are tailored to your server, with your values already substituted. It's important to note that the Connect blade is only available for Azure Database for PostgreSQL - Flexible Server and not for Single Server. Here's how you can use this feature:
+
+1. **Access Azure portal**: First, go to the Azure portal and choose the Connect blade.
+
+ :::image type="content" source="./media/how-to-migrate-using-dump-and-restore/portal-connect-blade.png" alt-text="Screenshot showing the placement of Connect blade in Azure portal." lightbox="./media/how-to-migrate-using-dump-and-restore/portal-connect-blade.png":::
+
+2. **Select your database**: In the Connect blade, you'll find a dropdown list of your databases. Select the database you want to dump.
+
+ :::image type="content" source="./media/how-to-migrate-using-dump-and-restore/dropdown-list-of-databases.png" alt-text="Screenshot showing the dropdown where specific database can be chosen." lightbox="./media/how-to-migrate-using-dump-and-restore/dropdown-list-of-databases.png":::
+
+3. **Choose the appropriate method**: Depending on your database size, you can choose between two methods:
+ - **`pg_dump` & `psql` - using singular text file**: Ideal for smaller databases, this option utilizes a single text file for the dump and restore process.
+ - **`pg_dump` & `pg_restore` - using multiple cores**: For larger databases, this method is more efficient as it uses multiple cores to handle the dump and restore process.
+
+ :::image type="content" source="./media/how-to-migrate-using-dump-and-restore/different-dump-methods.png" alt-text="Screenshot showing two possible dump methods." lightbox="./media/how-to-migrate-using-dump-and-restore/different-dump-methods.png":::
+
+4. **Copy and paste commands**: The portal provides you with ready-to-use `pg_dump` and `psql` or `pg_restore` commands, with values already substituted according to the server and database you've chosen. Copy and paste these commands.
## Prerequisites
+If you're using a Single Server, or don't have access to the Flexible Server portal, read through this documentation page. It contains information that is similar to what is presented in the Connect blade for Flexible Server on the portal.
To step through this how-to guide, you need: - An [Azure Database for PostgreSQL server](../single-server/quickstart-create-server-database-portal.md), including firewall rules to allow access.-- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
+- The [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html), [psql](https://www.postgresql.org/docs/current/app-psql.html), [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html), and [pg_dumpall](https://www.postgresql.org/docs/current/app-pg-dumpall.html) command-line utilities installed. You need `pg_dumpall` only if you want to migrate roles and permissions.
+- **Decide on the location for the dump**: You can run the dump from various locations, such as a separate VM, [cloud shell](../../cloud-shell/overview.md) (where the command-line utilities are already installed, but might not be the appropriate version, so always check the version with, for example, `psql --version`), or your own laptop. Always keep in mind the distance and latency between the PostgreSQL server and the location from which you're running the dump or restore.
+
+> [!IMPORTANT]
+> It is essential to use the `pg_dump`, `psql`, `pg_restore` and `pg_dumpall` utilities that are either of the same major version or a higher major version than the database server you are exporting data from or importing data to. Failing to do so may result in unsuccessful data migration. If your target server has a higher major version than the source server, use utilities that are either the same major version or higher than the target server.
++
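To confirm that your client utilities satisfy the version requirement above, print each utility's version and compare it with your server's major version. This is a minimal sketch and assumes the tools are already on your PATH.

```bash
# Print client versions; each should match or exceed the major version of the servers involved.
pg_dump --version
pg_dumpall --version
pg_restore --version
psql --version
```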
+> [!NOTE]
+> It's important to be aware that `pg_dump` can export only one database at a time. This limitation applies regardless of the method you have chosen, whether it's using a singular file or multiple cores.
++
+## Dumping users and roles with `pg_dumpall -r`
+`pg_dump` is used to extract a PostgreSQL database into a dump file. However, it's crucial to understand that `pg_dump` doesn't dump role or user definitions, because these are considered global objects within the PostgreSQL environment. For a comprehensive migration, including users and roles, you need to use `pg_dumpall -r`.
+This command captures all role and user information from your PostgreSQL environment. If you're migrating between databases on the same server, you can skip this step and move to the [Create a new database](#create-a-new-database) section.
+
+```bash
+pg_dumpall -r -h <server name> -U <user name> > roles.sql
+```
+
+For example, if you have a server named `mydemoserver` and a user named `myuser`, run the following command:
+```bash
+pg_dumpall -r -h mydemoserver.postgres.database.azure.com -U myuser > roles.sql
+```
+
+If you're using a Single Server, your username includes the server name component. Therefore, instead of `myuser`, use `myuser@mydemoserver`.
++
+### Dumping roles from a Flexible Server
+In a Flexible Server environment, enhanced security measures mean users don't have access to the `pg_authid` table, which is where role passwords are stored. This restriction affects how you perform a roles dump, because the standard `pg_dumpall -r` command attempts to read passwords from this table and fails due to lack of permission.
+
+When dumping roles from a Flexible Server, it's crucial to include the `--no-role-passwords` option in your `pg_dumpall` command. This option prevents `pg_dumpall` from attempting to access the `pg_authid` table, which it cannot read due to security restrictions.
+
+To successfully dump roles from a Flexible Server, use the following command:
+
+```bash
+pg_dumpall -r --no-role-passwords -h <server name> -U <user name> > roles.sql
+```
+
+For example, if you have a server named `mydemoserver` and a user named `myuser`, run the following command:
+
+```bash
+pg_dumpall -r --no-role-passwords -h mydemoserver.postgres.database.azure.com -U myuser > roles.sql
+```
+
+### Cleaning up the roles dump
+When migrating, the output file `roles.sql` might include certain roles and attributes that aren't applicable or permissible in the new environment. Here's what you need to consider:
+
+- **Removing attributes that can be set only by superusers**: If migrating to an environment where you don't have superuser privileges, remove attributes like `NOSUPERUSER` and `NOBYPASSRLS` from the roles dump.
+
+- **Excluding service-specific users**: Exclude Single Server service users, such as `azure_superuser` or `azure_pg_admin`. These are specific to the service and will be created automatically in the new environment.
+
+Use the following `sed` command to clean up your roles dump:
+
+```bash
+sed -i '/azure_superuser/d; /azure_pg_admin/d; /azuresu/d; /^CREATE ROLE replication/d; /^ALTER ROLE replication/d; /^ALTER ROLE/ {s/NOSUPERUSER//; s/NOBYPASSRLS//;}' roles.sql
+```
+
+This command deletes lines containing `azure_superuser`, `azure_pg_admin`, `azuresu`, lines starting with `CREATE ROLE replication` and `ALTER ROLE replication`, and removes the `NOSUPERUSER` and `NOBYPASSRLS` attributes from `ALTER ROLE` statements.
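Before restoring, you might want to confirm that the cleanup worked as expected. A minimal sketch follows; if `grep` prints nothing, none of the unwanted patterns remain.

```bash
# Verify that no service-specific roles or superuser-only attributes remain in the dump.
grep -nE 'azure_superuser|azure_pg_admin|azuresu|NOSUPERUSER|NOBYPASSRLS' roles.sql
```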
## Create a dump file that contains the data to be loaded
+To export your existing PostgreSQL database, on-premises or in a VM, to a dump file, run the following command in your existing environment:
-To back up an existing PostgreSQL database on-premises or in a VM, run the following command:
+#### [pg_dump & psql - using singular text file](#tab/psql)
+```bash
+pg_dump <database name> -h <server name> -U <user name> > <database name>_dump.sql
+```
+For example, if you have a server named `mydemoserver`, a user named `myuser` and a database called `testdb`, run the following command:
```bash
-pg_dump -Fc -v --host=<host> --username=<name> --dbname=<database name> -f <database>.dump
+pg_dump testdb -h mydemoserver.postgres.database.azure.com -U myuser > testdb_dump.sql
```
-For example, if you have a local server and a database called **testdb** in it, run:
+
+#### [pg_dump & pg_restore - using multiple cores](#tab/pgrestore)
```bash
-pg_dump -Fc -v --host=localhost --username=masterlogin --dbname=testdb -f testdb.dump
+pg_dump -Fd -j <number of cores> <database name> -h <server name> -U <user name> -f <database name>.dump
```
-## Restore the data into the target database
+In these commands, the `-j` option specifies the number of cores to use for the dump process. You can adjust this number based on how many cores are available on your PostgreSQL server, how many you want to allocate to the dump, and your performance requirements.
-After you've created the target database, you can use the `pg_restore` command and the `--dbname` parameter to restore the data into the target database from the dump file.
+For example, if you have a server named `mydemoserver`, a user named `myuser` and a database called `testdb`, and you want to use two cores for the dump, run the following command:
```bash
-pg_restore -v --no-owner --host=<server name> --port=<port> --username=<user-name> --dbname=<target database name> <database>.dump
+pg_dump -Fd -j 2 testdb -h mydemoserver.postgres.database.azure.com -U myuser -f testdb.dump
```
-Including the `--no-owner` parameter causes all objects created during the restore to be owned by the user specified with `--username`. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
++
+If you're using a Single Server, your username includes the server name component. Therefore, instead of `myuser`, use `myuser@mydemoserver`.
-> [!NOTE]
-> On Azure Database for PostgreSQL servers, TLS/SSL connections are on by default. If your PostgreSQL server requires TLS/SSL connections, but doesn't have them, set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error might read: "FATAL: SSL connection is required. Please specify SSL options and retry." In the Windows command line, run the command `SET PGSSLMODE=require` before running the `pg_restore` command. In Linux or Bash, run the command `export PGSSLMODE=require` before running the `pg_restore` command.
->
-In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb**, on target server **mydemoserver.postgres.database.azure.com**.
+## Restore the data into the target database
-Here's an example for how to use this `pg_restore` for Single Server:
+### Restore roles and users
+Before restoring your database objects, make sure you have properly dumped and cleaned up the roles. If you're migrating between databases on the same server, dumping and restoring the roles might not be necessary. However, for migrations across different servers or environments, this step is crucial.
+
+To restore the roles and users into the target database, use the following command:
```bash
-pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
+psql -f roles.sql -h <server_name> -U <user_name>
```
-Here's an example for how to use this `pg_restore` for Flexible Server:
+Replace `<server_name>` with the name of your target server and `<user_name>` with your username. This command uses the `psql` utility to execute the SQL commands contained in the `roles.sql` file, effectively restoring the roles and users on your target server.
+
+For example, if you have a server named `mydemoserver` and a user named `myuser`, run the following command:
```bash
-pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb testdb.dump
+psql -f roles.sql -h mydemoserver.postgres.database.azure.com -U myuser
```
-## Optimize the migration process
+If you're using a Single Server, your username includes the server name component. Therefore, instead of `myuser`, use `myuser@mydemoserver`.
-One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
+> [!NOTE]
+> If users with the same names already exist both on the server you're migrating from (Single Server or on-premises) and on your target server, be aware that this restoration process might change the passwords for those roles. Consequently, any subsequent commands you need to execute might require the updated passwords. This doesn't apply if your source server is a Flexible Server, because Flexible Server doesn't allow dumping passwords for users due to enhanced security measures.
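If you need to set a known password for such a role after the restore, you can do so with `ALTER ROLE`. This is a sketch only; the role name and password below are hypothetical.

```bash
# Hypothetical role name and password; sets a known password on the target server
# for a role whose password changed during the roles restore.
psql "host=mydemoserver.postgres.database.azure.com user=myuser dbname=postgres sslmode=require" \
  -c "ALTER ROLE app_user WITH PASSWORD 'replace-with-a-strong-password';"
```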
-> [!NOTE]
-> For detailed syntax information, see [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
->
-### For the backup
+### Create a new database
+Before restoring your database, you might need to create a new, empty database. To do this, the user you're connecting with must have the `CREATEDB` permission. Here are two commonly used methods:
+
+1. **Using the `createdb` utility**
+ The `createdb` program allows for database creation directly from the bash command line, without the need to log into PostgreSQL or leave the operating system environment. For instance:
+
+ ```bash
+ createdb <new database name> -h <server name> -U <user name>
+ ```
+ For example, if you have a server named `mydemoserver`, a user named `myuser`, and you want to create a new database named `testdb_copy`, run the following command:
-Take the backup with the `-Fc` switch, so that you can perform the restore in parallel to speed it up. For example:
+ ```bash
+ createdb testdb_copy -h mydemoserver.postgres.database.azure.com -U myuser
+ ```
+ If you're using a Single Server, your username includes the server name component. Therefore, instead of `myuser`, use `myuser@mydemoserver`.
+
+2. **Using SQL command**
+To create a database using an SQL command, you'll need to connect to your PostgreSQL server via a command line interface or a database management tool. Once connected, you can use the following SQL command to create a new database:
+
+```sql
+CREATE DATABASE <new database name>;
+```
+
+Replace `<new database name>` with the name you wish to give your new database. For example, to create a database named `testdb_copy`, the command would be:
+
+```sql
+CREATE DATABASE testdb_copy;
+```
++
+### Restoring the dump
+After you've created the target database, you can restore the data into it from the dump file. During the restoration, log any errors to an `errors.log` file, and check its contents after the restore is done.
+
+#### [pg_dump & psql - using singular text file](#tab/psql)
```bash
-pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
+psql -f <database name>_dump.sql <new database name> -h <server name> -U <user name> 2> errors.log
```
-### For the restore
+For example, if you have a server named `mydemoserver`, a user named `myuser` and a new database called `testdb_copy`, run the following command:
-- Move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to. Perform the `pg_restore` from that VM to reduce network latency. Create the VM with [accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) enabled.
+```bash
+psql -f testdb_dump.sql testdb_copy -h mydemoserver.postgres.database.azure.com -U myuser 2> errors.log
+```
-- Open the dump file to verify that the create index statements are after the insert of the data. If it isn't the case, move the create index statements after the data is inserted. This should already be done by default, but it's a good idea to confirm. -- Restore with the `-j N` switch (where `N` represents the number) to parallelize the restore. The number you specify is the number of cores on the target server. You can also set to twice the number of cores of the target server to see the impact.
+#### [pg_dump & pg_restore - using multiple cores](#tab/pgrestore)
+```bash
+pg_restore -Fd -j <number of cores> -d <new database name> <database name>.dump -h <server name> -U <user name> 2> errors.log
+```
- Here's an example for how to use this `pg_restore` for Single Server:
+In these commands, the `-j` option specifies the number of cores to use for the restore process. You can adjust this number based on how many cores are available on your PostgreSQL server, how many you want to allocate to the restore, and your performance requirements.
- ```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
- ```
+For example, if you have a server named `mydemoserver`, a user named `myuser`, and a new database called `testdb_copy`, and you want to use two cores for the restore, run the following command:
- Here's an example for how to use this `pg_restore` for Flexible Server:
+```bash
+pg_restore -Fd -j 2 -d testdb_copy testdb.dump -h mydemoserver.postgres.database.azure.com -U myuser 2> errors.log
+```
- ```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
- ```
++
+## Post-restoration check
+After the restoration process is complete, it's important to review the `errors.log` file for any errors that may have occurred. This step is crucial for ensuring the integrity and completeness of the restored data. Address any issues found in the log file to maintain the reliability of your database.
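One quick way to scan the log is sketched below; the search patterns are only a starting point and might need adjusting for your workload.

```bash
# Count and preview errors captured during the restore.
grep -icE 'error|fatal' errors.log
grep -iE 'error|fatal' errors.log | head
```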
-- You can also edit the dump file by adding the command `set synchronous_commit = off;` at the beginning, and the command `set synchronous_commit = on;` at the end. Not turning it on at the end, before the apps change the data, might result in subsequent loss of data. -- On the target Azure Database for PostgreSQL server, consider doing the following before the restore:
-
- - Turn off query performance tracking. These statistics aren't needed during the migration. You can do this by setting `pg_stat_statements.track`, `pg_qs.query_capture_mode`, and `pgms_wait_sampling.query_capture_mode` to `NONE`.
- - Use a high compute and high memory SKU, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred SKU after the restore is complete. The higher the SKU, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the `pg_restore` command.
+## Optimize the migration process
- - More IOPS on the target server might improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting isn't reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
+When you work with large databases, the dump and restore process can be lengthy, and you might need to optimize it to ensure efficiency and reliability. Be aware of the factors that can affect the performance of these operations, and take steps to optimize them.
-Remember to test and validate these commands in a test environment before you use them in production.
+For detailed guidance on optimizing the dump and restore process, refer to the [Best practices for pg_dump and pg_restore](../flexible-server/how-to-pgdump-restore.md) article. This resource provides comprehensive information and strategies that can be beneficial for handling large databases.
## Next steps--- To migrate a PostgreSQL database by using export and import, see [Migrate your PostgreSQL database using export and import](how-to-migrate-using-export-and-import.md).
+- [Best practices for pg_dump and pg_restore](../flexible-server/how-to-pgdump-restore.md).
- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-export-and-import.md
- Title: Migrate a database - Azure Database for PostgreSQL - Single Server
-description: Describes how extract a PostgreSQL database into a script file and import the data into the target database from that file.
----- Previously updated : 09/22/2020--
-# Migrate your PostgreSQL database using export and import
--
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file. If you want to migrate all the databases, you can use [pg_dumpall](https://www.postgresql.org/docs/current/app-pg-dumpall.html) to dump all the databases into one script file.
-
-## Prerequisites
-To step through this how-to guide, you need:
-- An [Azure Database for PostgreSQL server](../single-server/quickstart-create-server-database-portal.md) with firewall rules to allow access and database under it.-- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) command-line utility installed-- [psql](https://www.postgresql.org/docs/current/static/app-psql.html) command-line utility installed-
-Follow these steps to export and import your PostgreSQL database.
-
-## Create a script file using pg_dump that contains the data to be loaded
-To export your existing PostgreSQL database on-premises or in a VM to a sql script file, run the following command in your existing environment:
-
-```bash
-pg_dump --host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
-```
-For example, if you have a local server and a database called **testdb** in it:
-```bash
-pg_dump --host=localhost --username=masterlogin --dbname=testdb --file=testdb.sql
-```
-
-## Import the data on target Azure Database for PostgreSQL
-You can use the psql command line and the --dbname parameter (-d) to import the data into the Azure Database for PostgreSQL server and load data from the sql file.
-
-```bash
-psql --file=<database>.sql --host=<server name> --port=5432 --username=<user> --dbname=<target database name>
-```
-This example uses psql utility and a script file named **testdb.sql** from previous step to import data into the database **mypgsqldb** on the target server **mydemoserver.postgres.database.azure.com**.
-
-For **Single Server**, use this command
-```bash
-psql --file=testdb.sql --host=mydemoserver.database.windows.net --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb
-```
-
-For **Flexible Server**, use this command
-```bash
-psql --file=testdb.sql --host=mydemoserver.database.windows.net --port=5432 --username=mylogin --dbname=mypgsqldb
-```
-----
-## Next steps
-- To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](how-to-migrate-using-dump-and-restore.md).-- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).-
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Azure services are presented in the following tables by category. Note that some
As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and strategic. Service categories are assigned at general availability. Often, services start their lifecycle as a strategic service and as demand and utilization increases may be promoted to mainstream or foundational. The following table lists strategic services. > [!div class="mx-tableFixed"]
-> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic |
+> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg)
+
+> Strategic |
> || > | Azure API for FHIR | > | Azure Analysis Services |
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Remote Rendering | > | Azure SignalR Service | > | Azure Spatial Anchors |
-> | Azure Spring Cloud |
+> | Azure Spring Apps |
> | Azure Storage: Archive Storage | > | Azure Synapse Analytics | > | Azure Ultra Disk Storage |
As mentioned previously, Azure classifies services into three categories: founda
> | Virtual Machines: NDv2-series | > | Virtual Machines: NP-series | > | Virtual Machines: NVv3-series |
-> | Virtual Machines: NVv4-series |
+> | Virtual Machines: NVv4-series |
> | Virtual Machines: SAP HANA on Azure Large Instances | + Older generations of services or virtual machines aren't listed. For more information, see [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md). To learn more about preview services that aren't yet in general availability and to see a listing of these services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). For a complete listing of services that support availability zones, see [Azure services that support availability zones](availability-zones-service-support.md).
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
Last updated 03/05/2023
-+ # Configure the control plane
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Last updated 12/1/2022
-+ # Use SAP Deployment Automation Framework from Azure DevOps Services
sap Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/troubleshooting.md
Last updated 12/05/2023
-+ # Troubleshooting the SAP Deployment Automation Framework
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
Last updated 12/15/2023
+ # Tutorial: Enterprise scale for SAP Deployment Automation Framework
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
The following tables show gap analyses for the log types that currently rely on
### Windows logs
-|Log type / Support |Azure Monitor agent support |Log Analytics agent support |
-||||
-|**Security Events** | [Windows Security Events data connector](data-connectors/windows-security-events-via-ama.md) | [Windows Security Events data connector (Legacy)](data-connectors/security-events-via-legacy-agent.md) |
-|**Filtering by security event ID** | [Windows Security Events data connector (AMA)](data-connectors/windows-security-events-via-ama.md) | - |
-|**Filtering by event ID** | Collection only | - |
+| Log type / Support | Azure Monitor agent support | Log Analytics agent support |
+| | | |
+| **Security Events** | [Windows Security Events data connector](data-connectors/windows-security-events-via-ama.md) | [Windows Security Events data connector (Legacy)](data-connectors/security-events-via-legacy-agent.md) |
+| **Filtering by security event ID** | [Windows Security Events data connector (AMA)](data-connectors/windows-security-events-via-ama.md) | - |
+| **Filtering by event ID** | Collection only | - |
|**Windows Event Forwarding** | [Windows Forwarded Events](data-connectors/windows-forwarded-events.md) | - | |**Windows Firewall Logs** | - | [Windows Firewall data connector](data-connectors/windows-firewall.md) | |**Performance counters** | Collection only | Collection only |
-|**Windows Event Logs** | Collection only | Collection only |
+| **Windows (System) Event Logs** | Collection only | Collection only |
|**Custom logs (text)** | Collection only | Collection only | |**IIS logs** | Collection only | Collection only | |**Multi-homing** | Collection only | Collection only |
-|**Application and service logs** | - | Collection only |
-|**Sysmon** | Collection only | Collection only |
+| **Application and service logs** | Collection only | Collection only |
+| **Sysmon** | Collection only | Collection only |
|**DNS logs** | [Windows DNS servers via AMA connector](connect-dns-ama.md) (Public preview) | [Windows DNS Server connector](data-connectors/dns.md) (Public preview) | > [!IMPORTANT]
sentinel Deploy Data Connector Agent Container Other Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container-other-methods.md
Title: Microsoft Sentinel solution for SAP® applications - deploy and configure the SAP data connector agent container
-description: This article shows you how to manually deploy the container that hosts the SAP data connector agent. You do this to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
--
+ Title: Microsoft Sentinel solution for SAP® applications - manually deploy and configure the SAP data connector agent container using the command line
+description: This article shows you how to manually deploy the container that hosts the SAP data connector agent, using the Azure command line interface, in order to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
++ Previously updated : 01/18/2023 Last updated : 01/03/2024
-# Deploy and configure the container hosting the SAP data connector agent via the command line
+# Manually deploy and configure the container hosting the SAP data connector agent
-This article shows you how to use various methods to deploy the container that hosts the SAP data connector agent, and create new SAP systems under the agent. You do this to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
+This article shows you how to use the Azure command line interface to deploy the container that hosts the SAP data connector agent, and create new SAP systems under the agent. You use this connector agent to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
-This article shows you how to deploy the container and create SAP systems via managed identity, a registered application, a configuration file, or directly on the VM. Alternatively, you can [deploy the data connector agent via the UI](deploy-data-connector-agent-container.md) (Preview).
+Other ways to deploy the container and create SAP systems using the Azure portal or a *kickstart* script are described in [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md). These other methods make use of an Azure Key Vault to store SAP credentials, and are highly preferred over the method described here. You should use the manual deployment method only if none of the other options are available to you.
## Deployment milestones
Deployment of the Microsoft Sentinel Solution for SAP is divided into the follow
Read about the [deployment process](deploy-data-connector-agent-container.md#data-connector-agent-deployment-overview).
-## Deploy the data connector agent container
+## Prerequisites
-# [Managed identity](#tab/managed-identity)
+Read about the [prerequisites for deploying the agent container](deploy-data-connector-agent-container.md#prerequisites).
-1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`):
-
- ```azurecli
- az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity --role <role name> --scope <subscription Id>
-
- ```
-
- For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
-
- > [!IMPORTANT]
- > After the VM is created, be sure to apply any security requirements and hardening procedures applicable in your organization.
- >
-
- The command above will create the VM resource, producing output that looks like this:
-
- ```json
- {
- "fqdns": "",
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.Compute/virtualMachines/vmname",
- "identity": {
- "systemAssignedIdentity": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
- "userAssignedIdentities": {}
- },
- "location": "westeurope",
- "macAddress": "00-11-22-33-44-55",
- "powerState": "VM running",
- "privateIpAddress": "192.168.136.5",
- "publicIpAddress": "",
- "resourceGroup": "resourcegroupname",
- "zones": ""
- }
- ```
-
-1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps.
-
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step:
-
- ```azurecli
- az keyvault create \
- --name <KeyVaultName> \
- --resource-group <KeyVaultResourceGroupName>
- ```
-
-1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you run the deployment script in the coming steps.
-
-1. Run the following command to **assign a key vault access policy** to the VM's system-assigned identity that you copied above (substitute actual names for the `<placeholders>`):
-
- ```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <VM system-assigned identity> --secret-permissions get list set
- ```
-
- This policy will allow the VM to list, read, and write secrets from/to the key vault.
-
-1. **Sign in to the newly created machine** with a user with sudo privileges.
-
-1. **Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download)** to the machine on which you want to install the agent.
-
-1. **Download and run the deployment Kickstart script**:
- For public cloud, the command is:
- ```bash
- wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh
- ```
- For Microsoft Azure operated by 21Vianet, the command is:
- ```bash
- wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh --cloud mooncake
- ```
- For Azure Government - US, the command is:
- ```bash
- wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh --cloud fairfax
- ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
-2. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
-
- ```bash
- The process has been successfully completed, thank you!
- ```
-
- Note the Docker container name in the script output. You'll use it in the next step.
-
-3. Run the following command to **configure the Docker container to start automatically**.
-
- ```bash
- docker update --restart unless-stopped <container-name>
- ```
-
- To view a list of the available containers use the command: `docker ps -a`.
-
-# [Registered application](#tab/registered-application)
-
-1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-
-1. Run the following command to **create and register an application**:
-
- ```azurecli
- az ad sp create-for-rbac
- ```
-
- The command above will create the application, producing output that looks like this:
-
- ```json
- {
- "appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
- "displayName": "azure-cli-2022-01-28-17-59-06",
- "password": "ssssssssssssssssssssssssssssssssss",
- "tenant": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
- }
- ```
-
-1. Copy the **appId**, **tenant**, and **password** from the output. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
-
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step :
-
- ```azurecli
- az keyvault create \
- --name <KeyVaultName> \
- --resource-group <KeyVaultResourceGroupName>
- ```
-
-1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
-
-1. Run the following command to **assign a key vault access policy** to the registered application ID that you copied above (substitute actual names or values for the `<placeholders>`):
-
- ```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <appId> --secret-permissions get list set
- ```
-
- For example:
-
- ```azurecli
- az keyvault set-policy -n sentinelkeyvault -g sentinelresourcegroup --application-id aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --secret-permissions get list set
- ```
-
- This policy will allow the VM to list, read, and write secrets from/to the key vault.
-
-1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
-
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
- chmod +x ./sapcon-sentinel-kickstart.sh
- ```
-
-1. **Run the script**, specifying the application ID, secret (the "password"), tenant ID, and key vault name that you copied in the previous steps.
-
- ```bash
- ./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name>
- ```
-
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
-1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
-
- ```bash
- The process has been successfully completed, thank you!
- ```
-
- Note the Docker container name in the script output. You'll use it in the next step.
-
-1. Run the following command to **configure the Docker container to start automatically**.
-
- ```bash
- docker update --restart unless-stopped <container-name>
- ```
-
- To view a list of the available containers use the command: `docker ps -a`.
-
-# [Configuration file](#tab/config-file)
-
-1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-
-1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
-
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
- chmod +x ./sapcon-sentinel-kickstart.sh
- ```
-
-1. **Run the script**:
-
- ```bash
- ./sapcon-sentinel-kickstart.sh --keymode cfgf
- ```
-
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
-1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
-
- ```bash
- The process has been successfully completed, thank you!
- ```
-
- Note the Docker container name in the script output. You'll use it in the next step.
-
-1. Run the following command to **configure the Docker container to start automatically**.
-
- ```bash
- docker update --restart unless-stopped <container-name>
- ```
-
- To view a list of the available containers use the command: `docker ps -a`.
-
-# [Manual deployment](#tab/deploy-manually)
+## Deploy the data connector agent container manually
1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent. 1. Install [Docker](https://www.docker.com/) on the VM, following the [recommended deployment steps](https://docs.docker.com/engine/install/) for the chosen operating system.
-1. Use the following commands (replacing `<SID>` with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.ini file into that folder.
+1. Use the following commands (replacing `<SID>` with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.json file (for older versions use the systemconfig.ini file) into that folder.
```bash sid=<SID> mkdir -p /opt/sapcon/$sid cd /opt/sapcon/$sid
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.json
+ ```
+ For agent versions released before June 22, 2023, use systemconfig.ini instead of systemconfig.json. Substitute the following line for the last line in the previous code block.
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
``` 1. Edit the systemconfig.ini file to [configure the relevant settings](reference-systemconfig.md).
Read about the [deployment process](deploy-data-connector-agent-container.md#dat
docker start sapcon-$sid ``` -
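If you also want the container to start automatically after the VM reboots, you can set its restart policy. A minimal sketch, assuming the `$sid` variable is still set from the earlier steps:

```bash
# Configure the agent container to restart automatically unless explicitly stopped,
# then list containers to confirm its status.
docker update --restart unless-stopped sapcon-$sid
docker ps -a
```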
+<!-- -->
## Next steps
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Title: Microsoft Sentinel solution for SAP® applications - deploy and configure the SAP data connector agent container (via UI)
-description: This article shows you how to use the UI to deploy the container that hosts the SAP data connector agent. You do this to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
--
+ Title: Microsoft Sentinel solution for SAP® applications - deploy and configure the SAP data connector agent container
+description: This article shows you how to use the Azure portal to deploy the container that hosts the SAP data connector agent, in order to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
++ Previously updated : 01/18/2023 Last updated : 01/02/2024
-# Deploy and configure the container hosting the SAP data connector agent (via UI)
+# Deploy and configure the container hosting the SAP data connector agent
-This article shows you how to deploy the container that hosts the SAP data connector agent. You do this to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel solution for SAP® applications.
+This article shows you how to deploy the container that hosts the SAP data connector agent, and how to use it to create connections to your SAP systems. This two-step process is required to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel solution for SAP® applications.
-This article shows you how to deploy the container and create SAP systems via the UI. Also see [this video](https://www.youtube.com/watch?v=bg0vmUvcQ5Q) that shows the agent deployment process via the UI.
+The recommended method to deploy the container and create connections to SAP systems is via the Azure portal. This method is explained in this article and demonstrated in [this video on YouTube](https://www.youtube.com/watch?v=bg0vmUvcQ5Q). This article also shows a way to accomplish these objectives by calling a *kickstart* script from the command line.
-Alternatively, you can [deploy the data connector agent using other methods](deploy-data-connector-agent-container-other-methods.md): Managed identity, a registered application, a configuration file, or directly on the VM.
+Alternatively, you can deploy the data connector agent manually by issuing individual commands from the command line, as described in [this article](deploy-data-connector-agent-container-other-methods.md).
> [!IMPORTANT]
-> Deploying the container and creating SAP systems via the UI is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Deploying the container and creating connections to SAP systems via the Azure portal is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Deployment milestones
-Deployment of the Microsoft Sentinel solution for SAP® applications is divided into the following sections
+Deployment of the Microsoft Sentinel solution for SAP® applications is divided into the following sections:
1. [Deployment overview](deployment-overview.md)
Deployment of the Microsoft Sentinel solution for SAP® applications is divided
For the Microsoft Sentinel solution for SAP® applications to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
-The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using a *kickstart* script; however, you can choose to [deploy the container manually](deploy-data-connector-agent-container-other-methods.md?tabs=deploy-manually#deploy-the-data-connector-agent-container).
+The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using the Azure portal (in PREVIEW); however, you can choose to deploy the container using a *kickstart* script, or to [deploy the container manually](deploy-data-connector-agent-container-other-methods.md#deploy-the-data-connector-agent-container-manually).
-The agent connects to your SAP system to pull logs and other data from it, then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system - that's why you created a user and a role for the agent in your SAP system in the previous step.
+The agent connects to your SAP system to pull logs and other data from it, then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system&mdash;that's why you created a user and a role for the agent in your SAP system in the previous step.
-Your SAP authentication mechanism, and where you deploy your VM, will determine how and where your agent configuration information, including your SAP authentication secrets, is stored. These are the options, in descending order of preference:
+You have a few choices of how and where to store your agent configuration information, including your SAP authentication secrets. The decision of which one to use can be affected by where you deploy your VM and by which SAP authentication mechanism you decide to use. These are the options, in descending order of preference:
- An **Azure Key Vault**, accessed through an Azure **system-assigned managed identity** - An **Azure Key Vault**, accessed through a Microsoft Entra ID **registered-application service principal** - A plaintext **configuration file**
-If your SAP authentication is done using SNC and X.509 certificates, your only option is to use a configuration file. Select the [**Configuration file** tab below](deploy-data-connector-agent-container-other-methods.md?tabs=config-file#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.
+For any of these scenarios, you have the extra option to authenticate using SAP's Secure Network Communication (SNC) and X.509 certificates. This option provides a higher level of authentication security, but it's only a practical option in a limited set of scenarios.
-If you're not using SNC, then your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
+Ideally, your SAP configuration and authentication secrets should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
-- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the [**Managed identity** tab](deploy-data-connector-agent-container-other-methods.md?tabs=managed-identity#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container using managed identity.
+- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the [**Managed identity** tab](deploy-data-connector-agent-container.md?tabs=managed-identity#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container using managed identity.
- In the event that a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using an [Microsoft Entra registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md), or, as a last resort, a configuration file.
+ If a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md) or, as a last resort, a [**configuration file**](deploy-data-connector-agent-container.md?tabs=config-file#deploy-the-data-connector-agent-container).
-- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using an [Microsoft Entra registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the [**Registered application** tab below](deploy-data-connector-agent-container-other-methods.md?tabs=registered-application#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.
+- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the [**Registered application** tab below](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.
- If for some reason a registered-application service principal can't be used, you can use a configuration file, though this is not preferred.
+- If for some reason a registered-application service principal can't be used, you can use a [**configuration file**](reference-systemconfig.md), though this is not preferred.
-## Deploy the data connector agent container via the UI
+## Prerequisites
-In this section, you deploy the data connector agent. After you deploy the agent, you configure the agent to [connect to an SAP system](#connect-to-a-new-sap-system).
+Before you deploy the data connector agent, make sure you have done the following:
-### Prerequisites
+- Follow the [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).
+- If you plan to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), [take the preparatory steps for deploying the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md).
+- Set up a Key Vault, using either a [managed identity](deploy-data-connector-agent-container.md?tabs=managed-identity#create-key-vault) or a [registered application](deploy-data-connector-agent-container.md?tabs=registered-application#create-key-vault) (links are to the procedures shown below). Make sure you have the necessary permissions.
+ - If your circumstances do not allow for using Azure Key Vault, create a [**configuration file**](reference-systemconfig.md) to use instead.
+- For more information on these options, see the [overview section](#data-connector-agent-deployment-overview).
-- Follow the [Microsoft Sentinel Solution for SAP deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).
-- If you plan to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), [deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md).
-- Set up a [managed identity](#managed-identity) or a [registered application](#registered-application). For more information on these options, see the [overview section](#data-connector-agent-deployment-overview).
+## Deploy the data connector agent container
-#### Managed identity
+This section has three steps:
+- In the first step, you [create the virtual machine and set up your access to your SAP system credentials](#create-virtual-machine-and-configure-access-to-your-credentials). (This step may need to be performed by other appropriate personnel, but it must be done first. See [Prerequisites](#prerequisites).)
+- In the second step, you [set up and deploy the data connector agent](#deploy-the-data-connector-agent).
+- In the third step, you configure the agent to [connect to an SAP system](#connect-to-a-new-sap-system).
-1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
+### Create virtual machine and configure access to your credentials
+
+# [Managed identity](#tab/managed-identity)
-1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`):
+#### Create a managed identity with an Azure VM
+
+1. Run the following command to **Create a VM** in Azure (substitute actual names from your environment for the `<placeholders>`):
    ```azurecli
    az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity --role <role name> --scope <subscription Id>
In this section, you deploy the data connector agent. After you deploy the agent
    }
    ```
-1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps.
-
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step:
+1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps. This is your **managed identity**.
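    If you lose track of this value, you can retrieve it again later with the Azure CLI. This is an optional, illustrative check that assumes the same `<resource group name>` and `<VM Name>` placeholders used in the previous step:

    ```azurecli
    # Show the VM's system-assigned managed identity (the principal ID GUID)
    az vm identity show --resource-group <resource group name> --name <VM Name> --query principalId --output tsv
    ```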
- ```azurecli
- az keyvault create \
- --name <KeyVaultName> \
- --resource-group <KeyVaultResourceGroupName>
- ```
+# [Registered application](#tab/registered-application)
-1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you run the deployment script in the coming steps.
+#### Register an application to create an application identity
-1. Run the following command to **assign a key vault access policy** to the VM's system-assigned identity that you copied above (substitute actual names for the `<placeholders>`):
-
- ```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <VM system-assigned identity> --secret-permissions get list set
- ```
-
- This policy will allow the VM to list, read, and write secrets from/to the key vault.
-
-#### Registered application
-
-1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-
-1. Run the following command to **create and register an application**:
+1. Run the following command from the Azure command line to **create and register an application**:
    ```azurecli
    az ad sp create-for-rbac
In this section, you deploy the data connector agent. After you deploy the agent
1. Copy the **appId**, **tenant**, and **password** from the output. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
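    The output of `az ad sp create-for-rbac` is a small JSON object similar to the following sketch. The values shown here are placeholders, not real credentials:

    ```json
    {
      "appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
      "displayName": "azure-cli-2024-01-08-00-00-00",
      "password": "ssssssssssssssssssssssssssssssssss",
      "tenant": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
    }
    ```

    The **appId** is the application (client) ID, **password** is the client secret, and **tenant** is your Microsoft Entra tenant ID.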
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step:
+1. Before proceeding any further, create a virtual machine on which to deploy the agent. You can create this machine in Azure, in another cloud, or on-premises.
+
+# [Configuration file](#tab/config-file)
+
+#### Create a configuration file
+
+Key Vault is the recommended method to store your authentication credentials and configuration data.
+
+If you are prevented from using Azure Key Vault, you can use a configuration file instead. See the appropriate reference file:
+
+- [Systemconfig.ini file reference](reference-systemconfig.md) (for agent versions deployed before June 22, 2023).
+- [Systemconfig.json file reference](reference-systemconfig-json.md) (for agent versions deployed on or after June 22, 2023).
+
+Once you have the file prepared, but before proceeding any further, create a virtual machine on which to deploy the agent. Then, skip the Key Vault steps below and go directly to the step after them&mdash;[Deploy the data connector agent](#deploy-the-data-connector-agent).
+++
+#### Create Key Vault
+
+1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
+ (If you'll be using an existing key vault, ignore this step.)
    ```azurecli
    az keyvault create \
      --name <KeyVaultName> \
      --resource-group <KeyVaultResourceGroupName>
- ```
+ ```
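    Optionally, you can confirm that the key vault exists and is reachable before you continue. This is an illustrative check only; it isn't required by the deployment:

    ```azurecli
    # Returns the vault URI if the key vault exists and you have access to it
    az keyvault show --name <KeyVaultName> --resource-group <KeyVaultResourceGroupName> --query properties.vaultUri --output tsv
    ```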
-1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
+1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you assign the key vault access policy and run the deployment script in the coming steps.
-1. Run the following command to **assign a key vault access policy** to the registered application ID that you copied above (substitute actual names or values for the `<placeholders>`):
+#### Assign a key vault access policy
+
+1. Run the following command to **assign a key vault access policy** to the identity that you created and copied above (substitute actual names for the `<placeholders>`). Choose the appropriate tab for the type of identity you created to see the relevant command.
+
+ # [Managed identity](#tab/managed-identity)
+
+ Run this command to assign the access policy to your VM's **system-assigned managed identity**:
```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <appId> --secret-permissions get list set
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <VM system-assigned identity> --secret-permissions get list set
```
- For example:
+ This policy will allow the VM to list, read, and write secrets from/to the key vault.
+
+ # [Registered application](#tab/registered-application)
+
+ Run this command to assign the access policy to a **registered application identity**:
```azurecli
- az keyvault set-policy -n sentinelkeyvault -g sentinelresourcegroup --application-id aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --secret-permissions get list set
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <appId> --secret-permissions get list set
    ```

    This policy will allow the VM to list, read, and write secrets from/to the key vault.
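    For example, with illustrative names (`sentinelkeyvault`, `sentinelresourcegroup`) and a placeholder application ID, the command might look like this:

    ```azurecli
    az keyvault set-policy -n sentinelkeyvault -g sentinelresourcegroup --spn aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --secret-permissions get list set
    ```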
+ # [Configuration file](#tab/config-file)
+
+ Move on, nothing to see here: if you're using a configuration file, there's no key vault to create and no access policy to assign.
+
+
++

### Deploy the data connector agent
-1. From the Microsoft Sentinel portal, select **Data connectors**.
-1. In the search bar, type *Microsoft Sentinel for SAP*.
-1. Select the **Microsoft Sentinel for SAP** connector and select **Open connector**.
+Now that you've created a VM and a Key Vault, your next step is to create a new agent and connect to one of your SAP systems.
- You create an agent and SAP system under the **Configuration > Add an API based collector agent** area.
-
- :::image type="content" source="media/deploy-data-connector-agent-container/configuration-new-agent.png" alt-text="Screenshot of the Configuration > Add an API based collector agent area of the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/configuration-new-agent.png":::
+1. **Sign in to the newly created VM** on which you are installing the agent, as a user with sudo privileges.
+
+1. **Download or transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download)** to the machine.
+
+# [Azure portal (Preview)](#tab/azure-portal/managed-identity)
+
+> [!NOTE]
+> If you previously installed SAP connector agents manually or using the kickstart scripts, you can't configure or manage those agents in the Azure portal. If you want to use the portal to configure and update agents, you must reinstall your existing agents using the portal.
+
+Create a new agent through the Azure portal, authenticating with a managed identity:
-1. Deploy the agent. To add a system, you must add an agent first.
+1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
+
+1. In the search bar, type *SAP*.
+
+1. Select **Microsoft Sentinel for SAP** from the search results, and select **Open connector page**.
+
+1. To collect data from an SAP system, you must follow these two steps:
    1. [Create a new agent](#create-a-new-agent)
    1. [Connect the agent to a new SAP system](#connect-to-a-new-sap-system)
In this section, you deploy the data connector agent. After you deploy the agent

#### Create a new agent

1. In the **Configuration** area, select **Add new agent (Preview)**.
#### Create a new agent 1. In the **Configuration** area, select **Add new agent (Preview)**.
-
- :::image type="content" source="media/deploy-data-connector-agent-container/create-agent.png" alt-text="Screenshot of the Create a collector agent area.":::
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/configuration-new-agent.png" alt-text="Screenshot of the instructions to add an SAP API-based collector agent." lightbox="media/deploy-data-connector-agent-container/configuration-new-agent.png":::
1. Under **Create a collector agent** on the right, define the agent details:
- - Type the agent name. The agent name can include these characters:
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/create-agent-managed-id.png" alt-text="Screenshot of the Create a collector agent area.":::
+
+ - Enter the **Agent name**. The agent name can include these characters:
      - a-z
      - A-Z
      - 0-9
- - _
- - .
- - \-
- - Select the subscription and key vault.
- - Under **NWRFC SDK zip file path on the agent VM**, type a path that contains the SAP NetWeaver Remote Function Call (RFC), Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*.
- - To ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), select **Enable SNC connection support**. If you select this option, under **SAP Cryptographic Library path on the agent VM**, provide the path that contains the `sapgenpse` binary and `libsapcrypto.so` library.
+ - _ (underscore)
+ - . (period)
+ - \- (dash)
+
+ - Select the **Subscription** and **Key Vault** from their respective drop-downs.
+
+ - Under **NWRFC SDK zip file path on the agent VM**, type the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*.
+
+ - To ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), select **Enable SNC connection support**. If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**.
- > [!NOTE]
- > Make sure that you select **Enable SNC connection support** at this stage if you want to use an SNC connection. You can't go back and enable an SNC connection after you finish deploying the agent.
+ > [!NOTE]
+ > Make sure that you select **Enable SNC connection support** at this stage if you want to use an SNC connection. You can't go back and enable an SNC connection after you finish deploying the agent.
- Learn more about [deploying the connector over a SNC connection](configure-snc.md).
+ Learn more about [deploying the connector over a SNC connection](configure-snc.md).
- - To deploy the container and create SAP systems via managed identity, leave the default option **Managed Identity**, selected. To deploy the container and create SAP systems via a registered application, select **Application Identity**. You set up the managed identity or registered application (application identity) in the [prerequisites](#prerequisites).
+   - To authenticate to your key vault using a managed identity, leave the default option **Managed Identity** selected. You must have the managed identity set up ahead of time, as mentioned in the [prerequisites](#prerequisites).
1. Select **Create** and review the recommendations before you complete the deployment:

    :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
-1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**.
+1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**. After you've copied the command line, select **Close**.
+
+ The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
+
+ At this stage, the agent's **Health** status is **"Incomplete installation. Please follow the instructions"**. Once the agent is installed successfully, the status changes to **Agent healthy**. This update can take up to 10 minutes.
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/installation-status.png" alt-text="Screenshot of the health statuses of API-based collector agents on the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/installation-status.png":::
+
+ The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the command line will not be displayed here.
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step.
- The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
+ If you need to copy your command again, select **View** :::image type="content" source="media/deploy-data-connector-agent-container/view-icon.png" border="false" alt-text="Screenshot of the View icon."::: to the right of the **Health** column and copy the command next to **Agent command** on the bottom right.
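    After the script finishes, you can optionally verify on the VM that the agent container is running. Container names for this connector typically begin with `sapcon-`:

    ```bash
    # List running containers whose name contains "sapcon"
    docker ps --filter "name=sapcon"
    ```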
+
+#### Connect to a new SAP system
+
+Anyone adding a new connection to an SAP system must have write permission to the [Key Vault where the SAP credentials are stored](#create-key-vault). See [Prerequisites](#prerequisites).
+
+1. In the **Configuration** area, select **Add new system (Preview)**.
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/create-system.png" alt-text="Screenshot of the Add new system area.":::
+
+1. Under **Select an agent**, select the [agent you created in the previous step](#create-a-new-agent).
+
+1. Under **System identifier**, select the server type and provide the server details.
+
+1. Select **Next: Authentication**.
+
+1. For basic authentication, provide the user and password. If you selected an SNC connection when you [set up the agent](#create-a-new-agent), select **SNC** and provide the certificate details.
+
+1. Select **Next: Logs**.
+
+1. Select which logs you want to pull from SAP, and select **Next: Review and create**.
+
+1. Review the settings you defined. Select **Previous** to modify any settings, or select **Deploy** to deploy the system.
+
+1. The system configuration you defined is deployed into Azure Key Vault. You can now see the system details in the table under **Configure an SAP system and assign it to a collector agent**. This table displays the associated agent name, SAP System ID (SID), and health status for systems that you added via the Azure portal or via other methods.
+
+ At this stage, the system's **Health** status is **Pending**. If the agent is updated successfully, it pulls the configuration from Azure Key vault, and the status changes to **System healthy**. This update can take up to 10 minutes.
+
+ Learn more about how to [monitor your SAP system health](../monitor-sap-system-health.md).
+
+# [Azure portal (Preview)](#tab/azure-portal/registered-application)
+
+> [!NOTE]
+> If you previously installed SAP connector agents manually or using the kickstart scripts, you can't configure or manage those agents in the Azure portal. If you want to use the portal to configure and update agents, you must reinstall your existing agents using the portal.
+
+Create a new agent through the Azure portal, authenticating with a Microsoft Entra ID registered application:
+
+1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
- At this stage, the agent's **Health** status is **Incomplete installation. Please follow the instructions**. If the agent is added successfully, the status changes to **Agent healthy**. This update can take up to 10 minutes.
+1. In the search bar, type *SAP*.
- :::image type="content" source="media/deploy-data-connector-agent-container/configuration-new-agent.png" alt-text="Screenshot of the health statuses Configuration > Add an API based collector agent area of the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/configuration-new-agent.png":::
+1. Select **Microsoft Sentinel for SAP** from the search results, and select **Open connector page**.
- The table displays the agent name and health status for agents you deploy via the UI only.
+1. To collect data from an SAP system, you must follow these two steps:
+ 1. [Create a new agent](#create-a-new-agent-1)
+ 1. [Connect the agent to a new SAP system](#connect-to-a-new-sap-system-1)
+
+#### Create a new agent
+
+1. In the **Configuration** area, select **Add new agent (Preview)**.
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/configuration-new-agent.png" alt-text="Screenshot of the instructions to add an SAP API-based collector agent." lightbox="media/deploy-data-connector-agent-container/configuration-new-agent.png":::
+
+1. Under **Create a collector agent** on the right, define the agent details:
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/create-agent-app-id.png" alt-text="Screenshot of the Create a collector agent area.":::
+
+ - Enter the **Agent name**. The agent name can include these characters:
+ - a-z
+ - A-Z
+ - 0-9
+ - _ (underscore)
+ - . (period)
+ - \- (dash)
+
+ - Select the **Subscription** and **Key Vault** from their respective drop-downs.
+
+ - Under **NWRFC SDK zip file path on the agent VM**, type the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*.
+
+ - To ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), select **Enable SNC connection support**. If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**.
+
+ > [!NOTE]
+ > Make sure that you select **Enable SNC connection support** at this stage if you want to use an SNC connection. You can't go back and enable an SNC connection after you finish deploying the agent.
+
+ Learn more about [deploying the connector over a SNC connection](configure-snc.md).
+
+ - To authenticate to your key vault using a registered application, select **Application Identity**. You must have the registered application (application identity) set up ahead of time, as mentioned in the [prerequisites](#prerequisites).
+
+1. Select **Create** and review the recommendations before you complete the deployment:
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
+
+1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**. After you've copied the command line, select **Close**.
+
+ The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
+
+ At this stage, the agent's **Health** status is **"Incomplete installation. Please follow the instructions"**. Once the agent is installed successfully, the status changes to **Agent healthy**. This update can take up to 10 minutes.
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/installation-status.png" alt-text="Screenshot of the health statuses of API-based collector agents on the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/installation-status.png":::
+
+ The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the command line will not be displayed here.
+
+1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step.
+
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
    If you need to copy your command again, select **View** :::image type="content" source="media/deploy-data-connector-agent-container/view-icon.png" border="false" alt-text="Screenshot of the View icon."::: to the right of the **Health** column and copy the command next to **Agent command** on the bottom right.

#### Connect to a new SAP system
+Anyone adding a new connection to an SAP system must have write permission to the [Key Vault where the SAP credentials are stored](#create-key-vault). See [Prerequisites](#prerequisites).
1. In the **Configuration** area, select **Add new system (Preview)**.

    :::image type="content" source="media/deploy-data-connector-agent-container/create-system.png" alt-text="Screenshot of the Add new system area.":::

1. Under **Select an agent**, select the [agent you created in the previous step](#create-a-new-agent).
+
1. Under **System identifier**, select the server type and provide the server details.
+
1. Select **Next: Authentication**.
+
1. For basic authentication, provide the user and password. If you selected an SNC connection when you [set up the agent](#create-a-new-agent), select **SNC** and provide the certificate details.
+
1. Select **Next: Logs**.
+
1. Select which logs you want to pull from SAP, and select **Next: Review and create**.
-1. Review the settings you defined. Select **Previous** to modify any settings, or select **Deploy** to deploy the system.1.
- The system configuration you defined is deployed into Azure Key Vault. You can now see the system details in the table under **Configure an SAP system and assign it to a collector agent**. This table displays the associated agent name, SAP System ID (SID), and health status for systems that you added via the UI or via other methods.
+1. Review the settings you defined. Select **Previous** to modify any settings, or select **Deploy** to deploy the system.
+
+1. The system configuration you defined is deployed into Azure Key Vault. You can now see the system details in the table under **Configure an SAP system and assign it to a collector agent**. This table displays the associated agent name, SAP System ID (SID), and health status for systems that you added via the Azure portal or via other methods.
    At this stage, the system's **Health** status is **Pending**. If the agent is updated successfully, it pulls the configuration from Azure Key vault, and the status changes to **System healthy**. This update can take up to 10 minutes.

    Learn more about how to [monitor your SAP system health](../monitor-sap-system-health.md).
+# [Azure portal (Preview)](#tab/azure-portal/config-file)
+
+**The Azure portal can only be used with Azure Key Vault.**
+
+To use the command line to create an agent using a config file, see [these instructions](?tabs=config-file%2Ccommand-line#deploy-the-data-connector-agent).
+
+# [Command line script](#tab/command-line/managed-identity)
+
+Create a new agent using the command line, authenticating with a managed identity:
+
+1. **Download and run the deployment Kickstart script**:
+
+ For the Azure public commercial cloud, the command is:
+
+ ```bash
+ wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh
+ ```
+
+ - For Microsoft Azure operated by 21Vianet, add `--cloud mooncake` to the end of the copied command.
+
+ - For Azure Government - US, add `--cloud fairfax` to the end of the copied command.
+
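    For example, the full command for Azure Government - US would look like the following; only the `--cloud` flag is added to the command shown above:

    ```bash
    wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh --cloud fairfax
    ```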
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
+1. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+    To view a list of the available containers, use the command `docker ps -a`.
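    To confirm that the restart policy was applied, you can optionally inspect the container; the expected output is `unless-stopped`:

    ```bash
    # Print the container's restart policy
    docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <container-name>
    ```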
+
+# [Command line script](#tab/command-line/registered-application)
+
+Create a new agent using the command line, authenticating with a Microsoft Entra ID registered application:
+
+1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
+ chmod +x ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. **Run the script**, specifying the application ID, secret (the "password"), tenant ID, and key vault name that you copied in the previous steps.
+
+ ```bash
+ ./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name>
+ ```
+
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
+1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+    To view a list of the available containers, use the command `docker ps -a`.
+
+# [Command line script](#tab/command-line/config-file)
+
+1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
+
+1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
+ chmod +x ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. **Run the script**:
+
+ ```bash
+ ./sapcon-sentinel-kickstart.sh --keymode cfgf
+ ```
+
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
+1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+    To view a list of the available containers, use the command `docker ps -a`.
+++

## Next steps

Once the connector is deployed, proceed to deploy Microsoft Sentinel solution for SAP® applications content:

> [!div class="nextstepaction"]
> [Deploy the solution content from the content hub](deploy-sap-security-content.md)
+
+See this [YouTube video](https://youtu.be/FasuyBSIaQM), on the [Microsoft Security Community YouTube channel](https://www.youtube.com/@MicrosoftSecurityCommunity), for guidance on checking the health and connectivity of the SAP connector.
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
To successfully deploy the Microsoft Sentinel solution for SAP® applications, y
| **System architecture** | The data connector component of the SAP solution is deployed as a Docker container, and each SAP client requires its own container instance.<br>The container host can be either a physical machine or a virtual machine, can be located either on-premises or in any cloud. <br>The VM hosting the container ***does not*** have to be located in the same Azure subscription as your Microsoft Sentinel workspace, or even in the same Microsoft Entra tenant. |
| **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- 2 cores<br>- 4 GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- 2 cores<br>- 8 GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- 4 cores<br>- 16 GB RAM |
| **Administrative privileges** | Administrative privileges (root) are required on the container host machine. |
-| **Supported Linux versions** | The SAP data connector agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container-other-methods.md?tabs=deploy-manually#deploy-the-data-connector-agent-container) instead of using the kickstart script. |
+| **Supported Linux versions** | The SAP data connector agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container-other-methods.md#deploy-the-data-connector-agent-container-manually) instead of using the kickstart script. |
| **Network connectivity** | Ensure that the container host has access to: <br>- Microsoft Sentinel <br>- Azure Key Vault (in deployment scenarios where Azure Key Vault is used to store secrets)<br>- SAP system via the following TCP ports: *32xx*, *5xx13*, *33xx*, *48xx* (when SNC is used), where *xx* is the SAP instance number. |
| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/)<br><br>
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container-other-methods.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application ID.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application ID.
#### Enterprise Application secret
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container-other-methods.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application secret.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application secret.
#### Tenant ID
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container-other-methods.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the Microsoft Entra tenant ID.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the Microsoft Entra tenant ID.
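As an illustrative example only, a `kvsi`-mode invocation supplies the application ID, application secret, tenant ID, and key vault name together on one command line (all values below are placeholders):

```bash
./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret <application secret> -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name>
```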
#### Key Vault Name
sentinel Sap Audit Controls Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-controls-workbook.md
For more information, see:
- [Monitor the health of your SAP system](../monitor-sap-system-health.md)
- [Prerequisites for deploying the Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Troubleshooting your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
+
+See this [YouTube video](https://youtu.be/8_2ji5afBqc), on the [Microsoft Security Community YouTube channel](https://www.youtube.com/@MicrosoftSecurityCommunity), for a demonstration of this workbook.
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
Last updated 01/09/2023
When troubleshooting your Microsoft Sentinel for SAP data connector, you may find the following commands useful:
-|Function |Command |
-|||
-|**Stop the Docker container** | `docker stop sapcon-[SID]` |
-|**Start the Docker container** |`docker start sapcon-[SID]` |
-|**View Docker system logs** | `docker logs -f sapcon-[SID]` |
-|**Enter the Docker container** | `docker exec -it sapcon-[SID] bash` |
+| Function | Command |
+| | -- |
+| **Stop the Docker container** | `docker stop sapcon-[SID]` |
+| **Start the Docker container** | `docker start sapcon-[SID]` |
+| **View Docker system logs** | `docker logs -f sapcon-[SID]` |
+| **Enter the Docker container** | `docker exec -it sapcon-[SID] bash` |
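For example, to restart the connector container for an SAP system with the illustrative SID `A4H` and then follow its logs, you could run:

```bash
docker stop sapcon-A4H
docker start sapcon-A4H
docker logs -f sapcon-A4H
```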
For more information, see the [Docker CLI documentation](https://docs.docker.com/engine/reference/commandline/docker/).
Connector execution logs for your Microsoft Sentinel solution for SAP® applicat
If you want to check the Microsoft Sentinel for SAP data connector configuration file and make manual updates, perform the following steps:
-1. On your VM, open the **sapcon/[SID]/systemconfig.ini** file.
+1. On your VM, open the configuration file:
+
+ - **sapcon/[SID]/systemconfig.json** for agent versions released on or after June 22, 2023.
+ - **sapcon/[SID]/systemconfig.ini** for agent versions released before June 22, 2023.
1. Update the configuration if needed, and save the file.
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
Last updated 11/20/2023 ms.devlang: azurecli-+ # Tutorial: Deploy a web application connected to Azure Blob Storage with Service Connector
service-fabric Service Fabric Quickstart Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-containers.md
-+ Last updated 07/11/2022
site-recovery Vmware Azure Tutorial Prepare On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-prepare-on-premises.md
Create the account as follows:
**Task** | **Role/Permissions** | **Details**
--- | --- | ---
**VM discovery** | At least a read-only user<br/><br/> Data Center object -> Propagate to Child Object, role=Read-only | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
-**Full replication, failover, failback** | Create a role (Azure_Site_Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object -> Propagate to Child Object, role=Azure_Site_Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+**Full replication, failover, failback** | Create a role (Azure_Site_Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object -> Propagate to Child Object, role=Azure_Site_Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots, Create snapshots | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
## Prepare an account for Mobility service installation
spring-apps Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/cost-management.md
This article describes the cost-saving options and capabilities that Azure Spring Apps provides.
+## Save more on the Enterprise plan
+
+For the Enterprise plan, we now offer further discounts for longer commitments on both the Microsoft and VMware (by Broadcom) parts of the pricing. For more information, see [Azure Spring Apps pricing](https://azure.microsoft.com/pricing/details/spring-apps/).
+
+For the Microsoft part of the pricing, the Enterprise plan currently has yearly discounted pricing options available. For more information, see [Maximizing Value: Streamlined Cloud Solutions with Prime Cost Savings for Spring Apps](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/maximizing-value-streamlined-cloud-solutions-with-prime-cost/ba-p/3904599).
+
+For the VMware (by Broadcom) part of the pricing, the negotiable discount varies based on the number of years you sign up for. For more information, reach out to your sales representative.
+ ## Monthly free grants The first 50 vCPU hours and 100-GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
Autoscale reduces operating costs by terminating redundant resources when they'r
You can also set up autoscale rules for your applications in the Azure Spring Apps Standard consumption and dedicated plan. For more information, see [Quickstart: Set up autoscale for applications in the Azure Spring Apps Standard consumption and dedicated plan](quickstart-apps-autoscale-standard-consumption.md).
-## Stop maintaining unused environments
+## Stop maintaining unused environments
If you set up several environments while developing a product, it's important to remove the environments that are no longer in use once the product is live.
-## Remove unnecessary deployments
+## Remove unnecessary deployments
If you use strategies like blue-green deployment to reduce downtime, it can result in many idle deployments on staging slots, especially multiple app instances that aren't needed once newer versions are deployed to production.
-## Avoid over allocating resources
+## Avoid over allocating resources
Java users often reserve more processing power and memory than they really need. While it's fine to use large app instances during the initial months in production, you should adjust resource allocation based on usage data.
-## Avoid unnecessary scaling
+## Avoid unnecessary scaling
If you use more app instances than you need, you should adjust the number of instances based on real usage data.
-## Streamline monitoring data collection
+## Streamline monitoring data collection
If you collect more logs, metrics, and traces than you can use or afford, you must determine what's necessary for troubleshooting, capacity planning, and monitoring production. For example, you can reduce the frequency of application performance monitoring or be more selective about which logs, metrics, and traces you send to data aggregation tools.
-## Deactivate debug mode
+## Deactivate debug mode
If you forget to switch off debug mode for apps, a large amount of data is collected and sent to monitoring platforms. Forgetting to deactivate debug mode could be unnecessary and costly.
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
description: Determine the level of support for each storage account feature giv
Previously updated : 11/28/2023 Last updated : 12/11/2023 # Blob Storage feature support in Azure Storage accounts
-Feature support is impacted by the type of account that you create and the settings that enable on that account. You can use the tables in this article to assess feature support based on these factors. The items that appear in these tables will change over time as support continues to expand.
+Feature support is impacted by the type of account that you create and the settings that you enable on that account. You can use the tables in this article to assess feature support based on these factors. The items that appear in these tables will change over time as support continues to expand.
## How to use these tables
The following table describes whether a feature is supported in a standard gener
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed planned failover (preview)](../common/storage-disaster-recovery-guidance.md#customer-managed-planned-failover-preview) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed failover](../common/storage-disaster-recovery-guidance.md#customer-managed-failover) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
ms.devlang: csharp-+ # Enable and manage Azure Storage Analytics metrics (classic)
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 09/21/2023 Last updated : 01/04/2024
# Change how a storage account is replicated
-Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events. This including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/) even in the face of failures.
+Azure Storage always stores multiple copies of your data to protect it in the face of both planned and unplanned events. These events include transient hardware failures, network or power outages, and massive natural disasters. Data redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/), even in the face of failures.
-In this article, you will learn how to change the replication setting(s) for an existing storage account.
+This article describes the process of changing replication setting(s) for an existing storage account.
## Options for changing the replication type
Four aspects of the redundancy configuration of a storage account determine how
- **Geo-redundancy** - replication within a single "local" region or between a primary and a secondary region (LRS vs. GRS) - **Read access (RA)** - read access to the secondary region when geo-redundancy is used (GRS vs. RA-GRS)
-For an overview of all of the redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
+For a detailed overview of all of the redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
-You can change how your storage account is replicated from any redundancy configuration to any other with some limitations. Before making any changes, review those [limitations](#limitations-for-changing-replication-types) along with the [downtime requirements](#downtime-requirements) to ensure you have a plan that will produce the best end result within a time frame that suits your needs, and that satisfies your uptime requirements.
+You can change redundancy configurations when necessary, though some configurations are subject to [limitations](#limitations-for-changing-replication-types) and [downtime requirements](#downtime-requirements). To ensure that the limitations and requirements don't affect your timeframe and uptime requirements, always review these limitations and requirements before making any changes.
There are three ways to change the replication settings:

-- [Use the Azure portal, Azure PowerShell, or the Azure CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli) to add or remove geo-replication or read access to the secondary region.
-- [Perform a conversion](#perform-a-conversion) to add or remove zone-redundancy.
-- [Perform a manual migration](#manual-migration) in scenarios where the first two options are not supported, or to ensure the change is completed by a specific time.
+- [Add or remove geo-replication or read access](#change-the-replication-setting-using-the-portal-powershell-or-the-cli) to the secondary region.
+- [Add or remove zone-redundancy](#perform-a-conversion) by performing a conversion.
+- [Perform a manual migration](#manual-migration) in scenarios where the first two options aren't supported, or to ensure the change is completed within a specific timeframe.
-If you want to change both zone-redundancy and either geo-replication or read-access, a two-step process is required. Geo-redundancy and read-access can be changed at the same time, but the zone-redundancy conversion must be performed separately. These steps can be performed in any order.
+Geo-redundancy and read-access can be changed at the same time. However, any change that also involves zone-redundancy requires a conversion and must be performed separately using a two-step process. These two steps can be performed in any order.
### Replication change table
-The following table provides an overview of how to switch from each type of replication to another.
+The following table provides an overview of how to switch between replication types.
> [!NOTE]
-> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types). The manual migration option has been omitted from the table below to simplify it.
+> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types). The manual migration option is excluded from the following table for simplification.
| Switching | …to LRS | …to GRS/RA-GRS <sup>6</sup> | …to ZRS | …to GZRS/RA-GZRS <sup>2,6</sup> |
|--|--|--|--|--|
The following table provides an overview of how to switch from each type of repl
<sup>1</sup> [Adding geo-redundancy incurs a one-time egress charge](#costs-associated-with-changing-how-data-is-replicated).<br />
<sup>2</sup> If your storage account contains blobs in the archive tier, review the [access tier limitations](#access-tier) before changing the redundancy type to geo- or zone-redundant.<br />
-<sup>3</sup> The type of conversion supported depends on the storage account type. See [the storage account table](#storage-account-type) for more details.<br />
-<sup>4</sup> Conversion to ZRS or GZRS for an LRS account resulting from a failover is not supported. For more details see [Failover and failback](#failover-and-failback).<br />
-<sup>5</sup> Converting from LRS to ZRS is [not supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares](#protocol-support). <br />
-<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region cannot be initiated until data synchronization between the two regions has completed.<br />
+<sup>3</sup> The type of conversion supported depends on the storage account type. For more information, see the [storage account table](#storage-account-type).<br />
+<sup>4</sup> Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported. For more information, see [Failover and failback](#failover-and-failback).<br />
+<sup>5</sup> Converting from LRS to ZRS [isn't supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares](#protocol-support). <br />
+<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region can't be initiated until data synchronization between the two regions is complete.<br />
## Change the replication setting
-Depending on your scenario from the [replication change table](#replication-change-table), use one of the methods below to change your replication settings.
+Depending on your scenario from the [replication change table](#replication-change-table), use one of the following methods to change your replication settings.
### Change the replication setting using the portal, PowerShell, or the CLI
-In most cases you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account. If you are initiating a zone redundancy conversion, you can change the setting from within the Azure portal, but not from PowerShell or the Azure CLI.
+In most cases you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account.
-Changing how your storage account is replicated in the Azure portal does not result in down time for your applications, including changes that require a conversion.
+Changing how your storage account is replicated in the Azure portal doesn't result in downtime for your applications, including changes that require a conversion.
# [Portal](#tab/portal)
To change the redundancy option for your storage account in the Azure portal, fo
1. Update the **Redundancy** setting.
1. Select **Save**.
- :::image type="content" source="media/redundancy-migration/change-replication-option.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="media/redundancy-migration/change-replication-option.png":::
+ :::image type="content" source="media/redundancy-migration/change-replication-option-sml.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="media/redundancy-migration/change-replication-option.png":::
# [PowerShell](#tab/powershell)
-To change the redundancy option for your storage account with PowerShell, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command and specify the `-SkuName` parameter:
+You can use Azure PowerShell to change the redundancy options for your storage account.
+
+To change between locally redundant and geo-redundant storage, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) cmdlet and specify the `-SkuName` parameter.
```powershell
Set-AzStorageAccount -ResourceGroupName <resource_group> `
    -Name <storage_account> `
    -SkuName <sku>
```
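For reference, here's a minimal usage sketch. The resource group and account names are placeholders, and `Standard_GRS` is just one possible target SKU; substitute the values that match your scenario.

```powershell
# Hypothetical example: convert an existing account to geo-redundant storage (GRS).
# Replace the resource group, account name, and SKU with your own values.
Set-AzStorageAccount -ResourceGroupName "example-rg" `
    -Name "examplestorage001" `
    -SkuName "Standard_GRS"
```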
+You can also add or remove zone redundancy for your storage account. To change between locally redundant and zone-redundant storage with PowerShell, call the [Start-AzStorageAccountMigration](/powershell/module/az.storage/start-azstorageaccountmigration) command and specify the `-TargetSku` parameter:
+
+```powershell
+Start-AzStorageAccountMigration
+ -AccountName <String>
+ -ResourceGroupName <String>
+ -TargetSku <String>
+ -AsJob
+```
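As a rough illustration, the following sketch starts a conversion from LRS to ZRS and runs it as a background job. The account and resource group names are hypothetical, and `Standard_ZRS` is only one possible target SKU.

```powershell
# Hypothetical example: start an LRS-to-ZRS conversion as a background job.
Start-AzStorageAccountMigration `
    -AccountName "examplestorage001" `
    -ResourceGroupName "example-rg" `
    -TargetSku "Standard_ZRS" `
    -AsJob
```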
+
+To track the current migration status of the conversion initiated on your storage account, call the [Get-AzStorageAccountMigration](/powershell/module/az.storage/get-azstorageaccountmigration) cmdlet:
+
+```powershell
+Get-AzStorageAccountMigration
+ -AccountName <String>
+ -ResourceGroupName <String>
+```
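A short usage sketch with placeholder names follows; piping to `Format-List` simply prints all of the properties returned for the migration, including its current status.

```powershell
# Hypothetical example: inspect the conversion started for the account.
Get-AzStorageAccountMigration `
    -AccountName "examplestorage001" `
    -ResourceGroupName "example-rg" |
    Format-List
```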
+ # [Azure CLI](#tab/azure-cli)
-To change the redundancy option for your storage account with Azure CLI, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and specify the `--sku` parameter:
+You can use the Azure CLI to change the redundancy options for your storage account.
+
+To change between locally redundant and geo-redundant storage, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and specify the `--sku` parameter:
```azurecli-interactive
az storage account update \
- --name <storage-account>
+ --name <storage-account> \
    --resource-group <resource_group> \
    --sku <sku>
```
+You can also add or remove zone redundancy for your storage account. To change between locally redundant and zone-redundant storage with Azure CLI, call the [az storage account migration start](/cli/azure/storage/account/migration#az-storage-account-migration-start) command and specify the `--sku` parameter:
+
+```azurecli-interactive
+az storage account migration start \
+ --account-name <string> \
+ -g <string> \
+ --sku <string> \
+ --no-wait
+```
+
+To track the current migration status of the conversion initiated on your storage account with Azure CLI, use the [az storage account migration show](/cli/azure/storage/account/migration#az-storage-account-migration-show) command:
+
+```azurecli-interactive
+az storage account migration show \
+ --account-name <string> \
+ -g <string> \
+ -n "default"
+```
+ ### Perform a conversion

A redundancy "conversion" is the process of changing the zone-redundancy aspect of a storage account.
-During a conversion, [there is no data loss or application downtime required](#downtime-requirements).
+During a conversion, [there's no data loss or application downtime required](#downtime-requirements).
There are two ways to initiate a conversion:
Customer-initiated conversion adds a new option for customers to start a convers
>
> There is no SLA for completion of a customer-initiated conversion.
>
-> For more details about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).
+> For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).
Customer-initiated conversion is only available from the Azure portal, not from PowerShell or the Azure CLI. To initiate the conversion, perform the same steps used for changing other replication settings in the Azure portal as described in [Change the replication setting using the portal, PowerShell, or the CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli).
-Customer-initiated conversion is not available in all regions. See the [region limitations](#region) for more details.
+Customer-initiated conversion isn't available in all regions. For more information, see the [region limitations](#region).
##### Monitoring customer-initiated conversion progress

The status of your customer-initiated conversion is displayed on the **Redundancy** page of the storage account:
-As the conversion request is evaluated and processed, the status should progress through the list shown in the table below:
+As the conversion request is evaluated and processed, the status should progress through the list shown in the following table:
| Status | Explanation |
|--|--|
| Submitted for conversion | The conversion request was successfully submitted for processing. |
-| In Progress<sup>1</sup> | The actual conversion has begun. |
-| Completed<br>**- or -**</br>Failed<sup>2</sup> | The conversion has successfully completed.<br>**- or -**</br>The conversion failed. |
+| In Progress<sup>1</sup> | The actual conversion is in progress. |
+| Completed<br>**- or -**<br>Failed<sup>2</sup> | The conversion completed successfully.<br>**- or -**<br>The conversion failed. |
-<sup>1</sup> Once initiated, the conversion could take up to 72 hours to actually **begin**. If the conversion does not enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more details about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
+<sup>1</sup> Once initiated, the conversion could take up to 72 hours to actually **begin**. If the conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
<sup>2</sup> If the conversion fails, submit a support request to Microsoft to determine the reason for the failure.<br />

> [!NOTE]
Follow these steps to request a conversion from Microsoft:
- **Problem type**: Choose **Data Migration**.
- **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
- :::image type="content" source="media/redundancy-migration/request-live-migration-problem-desc-portal.png" alt-text="Screenshot showing how to request a conversion - Problem description tab." lightbox="media/redundancy-migration/request-live-migration-problem-desc-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-problem-desc-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Problem description tab." lightbox="media/redundancy-migration/request-live-migration-problem-desc-portal.png":::
1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion:
    - **Target replication type**: (choose the desired option from the drop-down)
    - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons)
    - Select **Submit**.
- :::image type="content" source="media/redundancy-migration/request-live-migration-solutions-portal.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page." lightbox="media/redundancy-migration/request-live-migration-solutions-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-solutions-portal-sml.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page." lightbox="media/redundancy-migration/request-live-migration-solutions-portal.png":::
-1. Take the appropriate action if the results indicate your storage account is not eligible for conversion. If it is eligible, select **Return to support request**.
+1. Take the appropriate action if the results indicate your storage account isn't eligible for conversion. Otherwise, select **Return to support request**.
1. Select **Next**. If you have more than one storage account to migrate, on the **Details** tab, specify the name for each account, separated by a semicolon.
- :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a conversion - Additional details tab." lightbox="media/redundancy-migration/request-live-migration-details-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Additional details tab." lightbox="media/redundancy-migration/request-live-migration-details-portal.png":::
-1. Fill out the additional required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
+1. Provide the required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. An Azure support agent reviews your case and contacts you to provide assistance.
### Manual migration
-A manual migration provides more flexibility and control than a conversion. You can use this option if you need your data moved by a certain date, or if conversion is [not supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. See [Move an Azure Storage account to another region](storage-account-move.md) for more details.
+A manual migration provides more flexibility and control than a conversion. You can use this option if you need your data moved by a certain date, or if conversion [isn't supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. For more detail, see [Move an Azure Storage account to another region](storage-account-move.md).
You must perform a manual migration if:

- You want to migrate your storage account to a different region.
- Your storage account is a block blob account.
-- Your storage account includes data in the archive tier and rehydrating the data is not desired.
+- Your storage account includes data in the archive tier and rehydrating the data isn't desired.
> [!IMPORTANT] > A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a [conversion](#perform-a-conversion) option. A conversion is an in-place migration with no downtime.
Limitations apply to some replication change scenarios depending on:
### Region
-Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you are converting your account to zone-redundant (ZRS, GZRS, or RA-GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](storage-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](storage-redundancy.md#geo-zone-redundant-storage).
+Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you're converting your account to zone-redundant (ZRS, GZRS, or RA-GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](storage-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](storage-redundancy.md#geo-zone-redundant-storage).
> [!IMPORTANT] > [Customer-initiated conversion](#customer-initiated-conversion) from LRS to ZRS is available in all public regions that support ZRS except for the following: >
-> - (Europe) West Europe
+> - (Europe) Italy North
> - (Europe) UK South
+> - (Europe) Poland Central
+> - (Europe) West Europe
+> - (Middle East) Israel Central
> - (North America) Canada Central
> - (North America) East US
> - (North America) East US 2
Make sure the region where your storage account is located supports all of the d
### Feature conflicts
-Some storage account features are not compatible with other features or operations. For example, the ability to failover to the secondary region is the key feature of geo-redundancy, but other features are not compatible with failover. For more information about features and services not supported with failover, see [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services). Converting an account to GRS, GZRS, or RA-GZRS might be blocked if a conflicting feature is enabled, or it might be necessary to disable the feature later before initiating a failover.
+Some storage account features aren't compatible with other features or operations. For example, the ability to fail over to the secondary region is the key feature of geo-redundancy, but other features aren't compatible with failover. For more information about features and services not supported with failover, see [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services). The conversion of an account to GRS, GZRS, or RA-GZRS might be blocked if a conflicting feature is enabled, or it might be necessary to disable the feature later before initiating a failover.
### Storage account type When planning to change your replication settings, consider the following limitations related to the storage account type.
-Some storage account types only support certain redundancy configurations, which affects whether they can be converted or migrated and, if so, how. For more details on Azure storage account types and the supported redundancy options, see [the storage account overview](storage-account-overview.md#types-of-storage-accounts).
+Some storage account types only support certain redundancy configurations, which affect whether they can be converted or migrated and, if so, how. For more information on Azure storage account types and the supported redundancy options, see [the storage account overview](storage-account-overview.md#types-of-storage-accounts).
The following table provides an overview of redundancy options available for storage account types and whether conversion and manual migration are supported:
The following table provides an overview of redundancy options available for sto
| Standard general purpose v1 | &#x2705; | | <sup>3</sup> | | &#x2705; |
| ZRS Classic<sup>4</sup><br /><sub>(available in standard general purpose v1 accounts)</sub> | &#x2705; | | | |
-<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion](#customer-initiated-conversion) is not currently supported.<br />
-<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](../../virtual-machines/disks-redundancy.md#limitations). If a LRS disk is regional (no zone specified) it may be converted by [changing the SKU](../../virtual-machines/disks-convert-types.md). If a LRS disk is zonal, then it can only be manually migrated by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
-<sup>3</sup> If your storage account is v1, you'll need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br />
-<sup>4</sup> ZRS Classic storage accounts have been deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
+<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion](#customer-initiated-conversion) isn't currently supported.<br />
+<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](../../virtual-machines/disks-redundancy.md#limitations). If an LRS disk is regional (no zone specified), it can be converted by [changing the SKU](../../virtual-machines/disks-convert-types.md). If an LRS disk is zonal, then it can only be manually migrated by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
+<sup>3</sup> If your storage account is v1, you need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br />
+<sup>4</sup> ZRS Classic storage accounts are deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
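The v1-to-v2 upgrade described in footnote 3 can also be performed from PowerShell. The following is a minimal sketch with placeholder resource group and account names, assuming the Az.Storage module is installed and you're signed in.

```powershell
# Hypothetical example: upgrade a general-purpose v1 account to v2 before attempting a conversion.
Set-AzStorageAccount -ResourceGroupName "example-rg" `
    -Name "examplestorage001" `
    -UpgradeToStorageV2
```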
#### Converting ZRS Classic accounts
The following table provides an overview of redundancy options available for sto
ZRS Classic was available only for **block blobs** in general-purpose V1 (GPv1) storage accounts. For more information about storage accounts, see [Azure storage account overview](storage-account-overview.md).
-ZRS Classic accounts asynchronously replicated data across data centers within one to two regions. Replicated data was not available unless Microsoft initiated a failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
+ZRS Classic accounts asynchronously replicated data across data centers within one to two regions. Replicated data wasn't available unless Microsoft initiated a failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
To change ZRS Classic to another replication type, use one of the following methods:
az storage account update -g <resource_group> -n <storage_account> --set kind=St
To manually migrate your ZRS Classic account data to another type of replication, follow the steps to [perform a manual migration](#manual-migration).
-If you want to migrate your data into a zone-redundant storage account located in a region different from the source account, you must perform a manual migration. For more details, see [Move an Azure Storage account to another region](storage-account-move.md).
+If you want to migrate your data into a zone-redundant storage account located in a region different from the source account, you must perform a manual migration. For more information, see [Move an Azure Storage account to another region](storage-account-move.md).
### Access tier
-Make sure the desired redundancy option supports the access tiers currently used in the storage account. For example, ZRS, GZRS and RA-GZRS storage accounts do not support the archive tier. See [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) for more details. To convert an LRS, GRS or RA-GRS account to one that supports zone-redundancy, first move the archived blobs to a storage account that supports blobs in the archive tier. Then convert the source account to ZRS, GZRS and RA-GZRS.
+Make sure the desired redundancy option supports the access tiers currently used in the storage account. For example, ZRS, GZRS, and RA-GZRS storage accounts don't support the archive tier. For more information, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md). To convert an LRS, GRS, or RA-GRS account to one that supports zone-redundancy, first move the archived blobs to a storage account that supports blobs in the archive tier. Then convert the source account to ZRS, GZRS, or RA-GZRS.
-To switch an LRS storage account that contains blobs in the archive tier to GRS or RA-GRS, you must first rehydrate all archived blobs to the Hot or Cool tier or perform a [manual migration](#manual-migration).
+An LRS storage account containing blobs in the archive tier can be switched to GRS or RA-GRS after rehydrating all archived blobs to the Hot or Cool tier. You can also perform a [manual migration](#manual-migration).
> [!TIP]
> Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming. But if you must change it, a [manual migration](#manual-migration) can save you the expense of rehydration.

### Protocol support
-Converting your storage account to zone-redundancy (ZRS, GZRS or RA-GZRS) is not supported if either of the following is true:
+You can't convert storage accounts to zone-redundancy (ZRS, GZRS, or RA-GZRS) if either of the following cases is true:
- NFSv3 protocol support is enabled for Azure Blob Storage
- The storage account contains Azure Files NFSv4.1 shares

### Failover and failback
-After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Initiate the failover](storage-initiate-account-failover.md#initiate-the-failover).
-If you performed an account failover for your GRS or RA-GRS account, the account is locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it will be LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it will be LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to add zone-redundancy.
+If you performed a customer-managed account failover to recover from an outage for your GRS or RA-GRS account, the account becomes locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported, even for so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it remains LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it remains LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, perform a manual migration to add zone-redundancy.
## Downtime requirements
-During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and no data is lost during a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
If you choose to perform a manual migration, downtime is required but you have more control over the timing of the migration process.

## Timing and frequency
-If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to actually **begin**. It could take longer to start if you [request a conversion by opening a support request](#support-requested-conversion). If a customer-initiated conversion does not enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
+If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to actually **begin**. It could take longer to start if you [request a conversion by opening a support request](#support-requested-conversion). If a customer-initiated conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
> [!IMPORTANT]
> There is no SLA for completion of a conversion. If you need more control over when a conversion begins and finishes, consider a [Manual migration](#manual-migration).

Generally, the more data you have in your account, the longer it takes to replicate that data to other zones or regions.
After a zone-redundancy conversion, you must wait at least 72 hours before chang
## Costs associated with changing how data is replicated
-Ordering from the least to the most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS.
+Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS, listed in order of increasing cost from LRS (least expensive) to RA-GZRS (most expensive).
-The costs associated with changing how data is replicated in your storage account depend on which [aspects of your redundancy configuration](#options-for-changing-the-replication-type) you change. A combination of data storage and egress bandwidth pricing determine the cost of making a change. For details on pricing, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
+The costs associated with changing how data is replicated in your storage account depend on which [aspects of your redundancy configuration](#options-for-changing-the-replication-type) you change. A combination of data storage and egress bandwidth pricing determines the cost of making a change. For details on pricing, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
-If you add zone-redundancy in the primary region, there is no initial cost associated with making that conversion, but the ongoing data storage cost will be higher due to the additional replication and storage space required.
+If you add zone-redundancy in the primary region, there's no initial cost associated with making that conversion, but the ongoing data storage cost is higher due to the increased replication and storage space required.
-If you add geo-redundancy, you will incur an egress bandwidth charge at the time of the change because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region.
+Geo-redundancy incurs an egress bandwidth charge at the time of the change because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region.
-If you remove geo-redundancy (change from GRS to LRS), there is no cost for making the change, but your replicated data is deleted from the secondary location.
+If you remove geo-redundancy (change from GRS to LRS), there's no cost for making the change, but your replicated data is deleted from the secondary location.
> [!IMPORTANT]
> If you remove read access to the secondary region (RA) (change from RA-GRS to GRS or LRS), that account is billed as RA-GRS for an additional 30 days beyond the date that it was converted.
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 09/22/2023 Last updated : 01/04/2024
# Azure storage disaster recovery planning and failover
-Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. Key components of a good disaster recovery plan include strategies for:
+Microsoft strives to ensure that Azure services are always available. However, unplanned service outages might occasionally occur. Key components of a good disaster recovery plan include strategies for:
- [Data protection](../blobs/data-protection-overview.md)
- [Backup and restore](../../backup/index.yml)
Microsoft strives to ensure that Azure services are always available. However, u
- [Failover](#plan-for-storage-account-failover)
- [Designing applications for high availability](#design-for-high-availability)
-This article focuses on failover for globally redundant storage accounts (GRS, GZRS, and RA-GZRS), and how to design your applications to be highly available if there's an outage and subsequent failover.
+This article describes the options available for globally redundant storage accounts, and provides recommendations for developing highly available applications and testing your disaster recovery plan.
## Choose the right redundancy option
-Azure Storage maintains multiple copies of your storage account to ensure durability and high availability. Which redundancy option you choose for your account depends on the degree of resiliency you need for your applications.
+Azure Storage maintains multiple copies of your storage account to ensure that availability and durability targets are met, even in the face of failures. The way in which data is replicated provides differing levels of protection. Each option offers its own benefits, so the option you choose depends upon the degree of resiliency your applications require.
-With locally redundant storage (LRS), three copies of your storage account are automatically stored and replicated within a single datacenter. With zone-redundant storage (ZRS), a copy is stored and replicated in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
+Locally redundant storage (LRS), the lowest-cost redundancy option, automatically stores and replicates three copies of your storage account within a single datacenter. Although LRS protects your data against server rack and drive failures, it doesn't account for disasters such as fire or flooding within a datacenter. In the face of such disasters, all replicas of a storage account configured to use LRS might be lost or unrecoverable.
-Recovery of a single copy of a storage account occurs automatically with LRS and ZRS.
+By comparison, zone-redundant storage (ZRS) retains a copy of a storage account and replicates it in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
+
+Recovery of a single copy of a storage account occurs automatically with both LRS and ZRS.
### Globally redundant storage and failover
-With globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region at least hundreds of miles away. This allows you to recover your data if there's an outage in the primary region. A feature that distinguishes globally redundant storage from LRS and ZRS is the ability to fail over to the secondary region if there's an outage in the primary region. The process of failing over updates the DNS entries for your storage account service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
+Geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), and read-access geo-zone-redundant storage (RA-GZRS) are examples of globally redundant storage options.
+When configured to use globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region located hundreds of miles away. This level of redundancy allows you to recover your data if there's an outage throughout the entire primary region.
+
+Unlike LRS and ZRS, globally redundant storage also allows for failover to a secondary region if there's an outage in the primary region. During the failover process, DNS entries for your storage account service endpoints are automatically updated such that the secondary region's endpoints become the new primary endpoints. Once the failover is complete, clients can begin writing to the new primary endpoints.
-RA-GRS and RA-GZRS redundancy configurations provide geo-redundant storage with the added benefit of read access to the secondary endpoint if there is an outage in the primary region. If an outage occurs in the primary endpoint, applications configured for read access to the secondary region and designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
+Read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS) also provide geo-redundant storage, but offer the added benefit of read access to the secondary endpoint. These options are ideal for high-availability, business-critical applications. If the primary endpoint experiences an outage, applications configured for read access to the secondary region can continue to operate. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
-For more information about redundancy in Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
+For more information about redundancy for Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
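If you're unsure which redundancy option an existing account uses, checking the account's SKU from PowerShell is one quick way to confirm it. This is a minimal sketch with placeholder names; the returned SKU name (for example, `Standard_RAGRS`) reflects the current redundancy configuration.

```powershell
# Hypothetical example: display the current redundancy (SKU) of a storage account.
(Get-AzStorageAccount -ResourceGroupName "example-rg" -Name "examplestorage001").Sku.Name
```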
## Plan for storage account failover
-Azure Storage accounts support two types of failover:
+Azure Storage accounts support three types of failover:
+- [**Customer-managed planned failover (preview)**](#customer-managed-planned-failover-preview) - Customers can manage storage account failover to test their disaster recovery plan.
- [**Customer-managed failover**](#customer-managed-failover) - Customers can manage storage account failover if there's an unexpected service outage.
-- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft only in the case of a severe disaster in the primary region. <sup>1,2</sup>
+- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft due to a severe disaster in the primary region. <sup>1,2</sup>
+
+<sup>1</sup> Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).<br/>
+<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances.
+
+Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover:
+
+| Type | Failover Scope | Use case | Expected data loss | HNS supported |
+|--|--|--|--|--|
+| Customer-managed planned failover | Storage account | The storage service endpoints for the primary and secondary regions are available, and you want to perform disaster recovery testing. <br></br> The storage service endpoints for the primary region are available, but a networking or compute outage in the primary region is preventing your workloads from functioning properly. | [No](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#azure-data-lake-storage-gen2) |
+| Customer-managed failover | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#azure-data-lake-storage-gen2) |
+| Microsoft-managed | Entire region or scale unit | The primary region becomes unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
+
+### Customer-managed planned failover (preview)
-<sup>1</sup>Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more details see [Microsoft-managed failover](#microsoft-managed-failover). <br/>
-<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances. <br/>
+> [!IMPORTANT]
+> Customer-managed planned failover is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowSoftFailover` as the feature name. The provider name for this preview feature is **Microsoft.Storage**.
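For readers who prefer PowerShell over the portal, the opt-in described in the note above can also be performed with the Az.Resources cmdlets. This is a sketch only; the feature and provider names come from that note.

```powershell
# Register the AllowSoftFailover preview feature for the Microsoft.Storage provider.
Register-AzProviderFeature -FeatureName "AllowSoftFailover" -ProviderNamespace "Microsoft.Storage"

# Check whether the registration has completed.
Get-AzProviderFeature -FeatureName "AllowSoftFailover" -ProviderNamespace "Microsoft.Storage"
```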
-Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover :
+To test your disaster recovery plan, you can perform a planned failover of your storage account from the primary to the secondary region. During the failover process, the original secondary region becomes the new primary and the original primary becomes the new secondary. After the failover is complete, users can proceed to access data in the new primary region and administrators can validate their disaster recovery plan. The storage account must be available in both the primary and secondary regions to perform a planned failover.
-| Type | Failover Scope | Use case | Expected data loss | HNS supported |
-||--|-|||
-| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes ](#azure-data-lake-storage-gen2)*[(In preview)](#azure-data-lake-storage-gen2)* |
-| Microsoft-managed | Entire region or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
+You can also use this type of failover during a partial networking or compute outage in your primary region. For example, an outage in the primary region might prevent your workloads from functioning properly while leaving your storage service endpoints available.
+
+During customer-managed planned failover and failback, data loss isn't expected as long as the primary and secondary regions are available throughout the entire process. See [Anticipate data loss and inconsistencies](#anticipate-data-loss-and-inconsistencies).
+
+To thoroughly understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How failover for disaster recovery testing (preview) works](storage-failover-customer-managed-planned.md).
### Customer-managed failover
+Although the two types of customer-managed failover work in a similar manner, they differ primarily in two ways:
+
+- The management of the redundancy configurations within the primary and secondary regions (LRS or ZRS).
+- The status of the geo-redundancy configuration at each stage of the failover and failback process.
+
+The following table compares the redundancy state of a storage account after a failover of each type:
+
+| Result of failover on... | Customer-managed planned failover | Customer-managed failover |
+|--|-|-|
+| ...the secondary region | The secondary region becomes the new primary | The secondary region becomes the new primary |
+| ...the original primary region | The original primary region becomes the new secondary | The copy of the data in the original primary region is deleted |
+| ...the account redundancy configuration | The storage account is converted to GRS | The storage account is converted to LRS |
+| ...the geo-redundancy configuration | Geo-redundancy is retained | Geo-redundancy is lost |
+
+The following table summarizes the resulting redundancy configuration at every stage of the failover and failback process for each type of failover:
+
+| Original <br> configuration | After <br> failover | After re-enabling <br> geo redundancy | After <br> failback | After re-enabling <br> geo redundancy |
+|--|--|--|--|--|
+| **Customer-managed planned failover** | | | | |
+| GRS | GRS | n/a <sup>2</sup> | GRS | n/a <sup>2</sup> |
+| GZRS | GRS | n/a <sup>2</sup> | GZRS | n/a <sup>2</sup> |
+| **Customer-managed failover** | | | | |
+| GRS | LRS | GRS <sup>1</sup> | LRS | GRS <sup>1</sup> |
+| GZRS | LRS | GRS <sup>1</sup> | ZRS | GZRS <sup>1</sup> |
+
+<sup>1</sup> Geo-redundancy is lost during a failover to recover from an outage and must be manually reconfigured.<br>
+<sup>2</sup> Geo-redundancy is retained during a failover for disaster recovery testing and doesn't need to be manually reconfigured.
+ If the data endpoints for the storage services in your storage account become unavailable in the primary region, you can fail over to the secondary region. After the failover is complete, the secondary region becomes the new primary and users can proceed to access data in the new primary region.
-To fully understand the impact that customer-managed account failover would have on your users and applications, it is helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+To understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
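For orientation, a customer-managed failover can be initiated from PowerShell as well as the portal. The following is a minimal sketch with placeholder names; review the data-loss guidance later in this article before running anything like it against a production account.

```powershell
# Hypothetical example: fail the account over to its secondary region.
# -Force skips the confirmation prompt; omit it to be prompted interactively.
Invoke-AzStorageAccountFailover -ResourceGroupName "example-rg" `
    -Name "examplestorage001" `
    -Force
```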
### Microsoft-managed failover
-In extreme circumstances where the original primary region is deemed unrecoverable within a reasonable amount of time due to a major disaster, Microsoft **may** initiate a regional failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you won't have write access to your storage account. Your applications can read from the secondary region if your storage account is configured for RA-GRS or RA-GZRS.
+In extreme circumstances such as major disasters, Microsoft **may** initiate a regional failover. Regional failovers are uncommon, and only take place when the original primary region is deemed unrecoverable within a reasonable amount of time. During these events, no action on your part is required. If your storage account is configured for RA-GRS or RA-GZRS, your applications can read from the secondary region during a Microsoft-managed failover. However, you don't have write access to your storage account until the failover process is complete.
> [!IMPORTANT] > Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
-> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
+> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region, datacenter or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. If you need the ability to selectively failover your individual storage accounts, use [customer-managed planned failover](#customer-managed-planned-failover-preview).
+ ### Anticipate data loss and inconsistencies

> [!CAUTION]
-> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
+> Storage account failover usually involves some amount of data loss, and could also potentially introduce file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
-Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary.
+Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, it's possible that the most recent writes might not yet be copied to the secondary.
-When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that hasn't also been copied to the secondary region is lost permanently.
+When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary region is maintained when the failover happens. However, any data written to the primary that doesn't yet exist within the secondary region is lost permanently.
The new primary region is configured to be locally redundant (LRS) after the failover.
You also might experience file or data inconsistencies if your storage accounts
#### Last sync time
-The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property if there's an outage to estimate the amount of data loss you may incur by initiating an account failover.
+The **Last Sync Time** property indicates the most recent time that data from the primary region was also written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary. By contrast, data and metadata written after the last sync time might not yet be copied to the secondary and could potentially be lost. During an outage, use this property to estimate the amount of data loss you might incur by initiating an account failover.
-As a best practice, design your application so that you can use the last sync time to evaluate expected data loss. For example, if you're logging all write operations, then you can compare the time of your last write operations to the last sync time to determine which writes haven't been synced to the secondary.
+As a best practice, design your application so that you can use **Last Sync Time** to evaluate expected data loss. For example, logging all write operations allows you to compare the times of your last write operation to the last sync time. This method enables you to determine which writes aren't yet synced to the secondary and are in danger of being lost.
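One way to put this into practice is to read the account's geo-replication statistics from PowerShell. The sketch below uses placeholder names and assumes the Az.Storage module; `LastSyncTime` is returned in UTC.

```powershell
# Hypothetical example: retrieve the Last Sync Time for a geo-redundant account.
$account = Get-AzStorageAccount -ResourceGroupName "example-rg" `
    -Name "examplestorage001" `
    -IncludeGeoReplicationStats

# Writes made before this UTC timestamp are guaranteed to exist in the secondary region.
$account.GeoReplicationStats.LastSyncTime
```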
For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).

#### File consistency for Azure Data Lake Storage Gen2
-Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. This means if an outage in the primary region occurs, it is possible that only some of the files in a container or directory might have successfully replicated to the secondary region. Consistency for all files in a container or directory after a storage account failover is not guaranteed.
+Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. Because replication occurs at this level, an outage in the primary region might prevent some of the files within a container or directory from successfully replicating to the secondary region. Consistency for all files within a container or directory after a storage account failover isn't guaranteed.
#### Change feed and blob data inconsistencies
-Storage account failover of geo-redundant storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of both updates to the change logs and the replication of blob data from the primary to the secondary region. The only situation in which inconsistencies would not be expected is when all of the current log records have been successfully flushed to the log files, and all of the storage data has been successfully replicated from the primary to the secondary region.
+Geo-redundant failover of storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled could result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of change log updates and data replication between the primary and secondary regions. To avoid inconsistencies, ensure that all log records are flushed to the log files, and that all storage data is replicated from the primary to the secondary region.
-For information about how change feed works see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
+For more information about change feed, see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
-Keep in mind that other storage account features require the change feed to be enabled such as [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
+Keep in mind that other storage account features also require the change feed to be enabled. These features include [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
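As a quick check before planning a failover, a sketch like the following (Azure PowerShell, hypothetical names) can show whether change feed is currently enabled on an account:

```powershell
# Read the blob service properties and inspect the change feed setting (hypothetical names).
$props = Get-AzStorageBlobServiceProperty `
    -ResourceGroupName "rg-example" `
    -StorageAccountName "examplestorageacct"

# Enabled is $true when change feed is turned on for the account.
$props.ChangeFeed
```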
#### Point-in-time restore inconsistencies
-Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure Portal.
-
-For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you can't restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past.
+Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure portal.
### The time and cost of failing over
-The time it takes for failover to complete after being initiated can vary, although it typically takes less than one hour.
+The time it takes for a customer-initiated failover to complete after being initiated can vary, although it typically takes less than one hour.
-A customer-managed failover loses its geo-redundancy after a failover (and failback). Your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
+A customer-managed planned failover doesn't lose its geo-redundancy after a failover and subsequent failback. However, a customer-managed failover to recover from an outage does lose its geo-redundancy after a failover (and failback). In that type of failover, your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
-You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. Also, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy, which will incur a cost. For more information about pricing, see:
+You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but re-replicating data to the new secondary region incurs a charge. Additionally, all archived blobs need to be rehydrated to an online tier before the account can be reconfigured for geo-redundancy. This rehydration also incurs an extra charge. For more information about pricing, see:
- [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/)
- [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)
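If you later choose to re-enable geo-redundancy, a minimal Azure PowerShell sketch (hypothetical names; `Standard_RAGRS` shown as one possible target SKU) might look like this:

```powershell
# Convert the account from LRS back to a geo-redundant SKU (hypothetical names).
Set-AzStorageAccount `
    -ResourceGroupName "rg-example" `
    -Name "examplestorageacct" `
    -SkuName Standard_RAGRS
```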
-After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. The amount of time it takes for replication to complete depends on several factors. These factors include:
- The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer and larger objects.
- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
All geo-redundant offerings support Microsoft-managed failover. In addition, som
| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
|---|---|---|
-| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
-| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
+| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
+| **Customer-managed planned failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
+| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
#### Classic storage accounts

> [!IMPORTANT]
-> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
+> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as the *classic* model, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
>
-> if there's a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+> During a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
#### Azure Data Lake Storage Gen2
All geo-redundant offerings support Microsoft-managed failover. In addition, som
> Customer-managed account failover for accounts that have a hierarchical namespace (Azure Data Lake Storage Gen2) is currently in PREVIEW and only supported in the following regions:
>
> - (Asia Pacific) Central India
+> - (Asia Pacific) South East Asia
+> - (Europe) North Europe
> - (Europe) Switzerland North > - (Europe) Switzerland West
+> - (Europe) West Europe
> - (North America) Canada Central
+> - (North America) East US 2
+> - (North America) South Central US
>
> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowHNSAccountFailover` as the feature name. A PowerShell sketch of the opt-in follows this note.
>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> if there's a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+> During a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
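The opt-in can also be scripted. The following is a sketch using Azure PowerShell, assuming the feature is registered under the `Microsoft.Storage` resource provider:

```powershell
# Register the preview feature on the current subscription, then check its registration state.
Register-AzProviderFeature -FeatureName "AllowHNSAccountFailover" -ProviderNamespace "Microsoft.Storage"
Get-AzProviderFeature -FeatureName "AllowHNSAccountFailover" -ProviderNamespace "Microsoft.Storage"
```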
### Unsupported features and services

The following features and services aren't supported for account failover:
-- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
+- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares and being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so causes sync to stop working and can also result in the unexpected data loss of any newly tiered files.
- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
-- To failover an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). If you want to resume using SFTP after the failover is complete, simply [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
+- To fail over an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). You can [re-enable SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support) if you want to resume using it after the failover is complete.
- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for global-redundancy with NFSv3 enabled.
-### Failover is not for account migration
+### Failover isn't for account migration
-Storage account failover shouldn't be used as part of your data migration strategy. Failover is a temporary solution to a service outage. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
+Storage account failover is a temporary solution to a service outage and shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
### Storage accounts containing archived blobs
-Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
+Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs must be rehydrated to an online tier before the account can be configured for geo-redundancy.
### Storage resource provider
-Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API enables you to manage the storage account and related resources.
-
-After a failover is complete, clients can again read and write Azure Storage data in the new primary region. However, the Azure Storage resource provider does not fail over, so resource management operations must still take place in the primary region. If the primary region is unavailable, you will not be able to perform management operations on the storage account.
+Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis for all actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API enables you to manage the storage account and related resources.
-Because the Azure Storage resource provider does not fail over, the [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property will return the original primary location after the failover is complete.
+As part of an account failover, the Azure Storage resource provider also fails over. As a result, resource management operations can occur in the new primary region after the failover is complete. The [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property returns the new primary location.
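A small Azure PowerShell sketch (hypothetical names) to confirm the location reported after a failover:

```powershell
# After failover, Location and PrimaryLocation reflect the new primary region (hypothetical names).
Get-AzStorageAccount `
    -ResourceGroupName "rg-example" `
    -Name "examplestorageacct" |
    Select-Object Location, PrimaryLocation, SecondaryLocation
```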
### Azure virtual machines
-Azure virtual machines (VMs) don't fail over as part of an account failover. If the primary region becomes unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover. Also, there's a potential data loss associated with the account failover. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
-
-Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
+Azure virtual machines (VMs) don't fail over as part of a storage account failover. Any VMs that failed over to a secondary region in response to an outage need to be recreated after the failover completes. Keep in mind that account failover can potentially result in data loss, including data stored in a temporary disk when the VM is shut down. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
### Azure unmanaged disks
-As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the VM before initiating the failover.
+Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. Before a failover can be initiated on an account containing unmanaged disks attached to Azure VMs, the VMs to which they're attached must be shut down. For this reason, Microsoft's recommended best practices include converting any unmanaged disks to managed disks.
-Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. To perform the failover, follow these steps:
+To perform a failover on an account containing unmanaged disks, follow these steps:
-1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they are attached. Doing so will make it easier to reattach the disks after the failover.
-2. Shut down the VM.
-3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
-4. Wait until the **Last Sync Time** has updated, and is later than the time at which you deleted the VM. This step is important, because if the secondary endpoint hasn't been fully updated with the VHD files when the failover occurs, then the VM may not function properly in the new primary region.
-5. Initiate the account failover.
-6. Wait until the account failover is complete and the secondary region has become the new primary region.
-7. Create a VM in the new primary region and reattach the VHDs.
-8. Start the new VM.
+1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they're attached. Doing so will make it easier to reattach the disks after the failover.
+1. Shut down the VM.
+1. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
+1. Wait until the **Last Sync Time** updates, and ensure that it's later than the time at which you deleted the VM. This step ensures that the secondary endpoint is fully updated with the VHD files when the failover occurs, and that the VM functions properly in the new primary region. A PowerShell sketch of this wait follows these steps.
+1. Initiate the account failover.
+1. Wait until the account failover is complete and the secondary region becomes the new primary region.
+1. Create a VM in the new primary region and reattach the VHDs.
+1. Start the new VM.
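The wait in step 4 can be scripted. Here's a rough sketch (Azure PowerShell, hypothetical names) that polls until **Last Sync Time** passes the time at which the VM was deleted:

```powershell
# Capture the UTC time right after deleting the VM, then poll until Last Sync Time passes it
# (hypothetical resource group and account names).
$vmDeletedUtc = (Get-Date).ToUniversalTime()

do {
    Start-Sleep -Seconds 60
    $stats = (Get-AzStorageAccount `
        -ResourceGroupName "rg-example" `
        -Name "examplestorageacct" `
        -IncludeGeoReplicationStats).GeoReplicationStats
    Write-Host "Last Sync Time (UTC): $($stats.LastSyncTime)"
} while ($stats.LastSyncTime -lt $vmDeletedUtc)
```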
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.

### Copying data as an alternative to failover
-If your storage account is configured for read access to the secondary region, then you can design your application to read from the secondary endpoint. If you prefer not to fail over if there's an outage in the primary region, you can use tools such as [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy data from your storage account in the secondary region to another storage account in an unaffected region. You can then point your applications to that storage account for both read and write availability.
+As previously mentioned, you can maintain high availability by configuring applications to use a storage account configured for read access to a secondary region. However, if you prefer not to fail over during an outage within the primary region, you can manually copy your data as an alternative. Tools such as [AzCopy](./storage-use-azcopy-v10.md) and [Azure PowerShell](/powershell/module/az.storage/) enable you to copy data from your storage account in the affected region to another storage account in an unaffected region. After the copy operation is complete, you can reconfigure your applications to use the storage account in the unaffected region for both read and write availability.
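For example, a rough Azure PowerShell sketch (hypothetical account and container names; the destination container is assumed to exist) of copying blobs to an account in an unaffected region:

```powershell
# Copy blobs from a container in the affected account to an account in another region
# (hypothetical names; assumes you're already signed in with Connect-AzAccount).
$srcCtx  = New-AzStorageContext -StorageAccountName "affectedaccount" -UseConnectedAccount
$destCtx = New-AzStorageContext -StorageAccountName "unaffectedaccount" -UseConnectedAccount

Get-AzStorageBlob -Container "data" -Context $srcCtx |
    Start-AzStorageBlobCopy -DestContainer "data" -DestContext $destCtx
```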
## Design for high availability
-It's important to design your application for high availability from the start. Refer to these Azure resources for guidance in designing your application and planning for disaster recovery:
+It's important to design your application for high availability from the start. Refer to these Azure resources for guidance when designing your application and planning for disaster recovery:
- [Designing resilient applications for Azure](/azure/architecture/framework/resiliency/app-design): An overview of the key concepts for architecting highly available applications in Azure.
- [Resiliency checklist](/azure/architecture/checklist/resiliency-per-service): A checklist for verifying that your application implements the best design practices for high availability.
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md): Design guidance for building applications to take advantage of geo-redundant storage.
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md): A tutorial that shows how to build a highly available application that automatically switches between endpoints as failures and recoveries are simulated.
-Keep in mind these best practices for maintaining high availability for your Azure Storage data:
+Refer to these best practices to maintain high availability for your Azure Storage data:
-- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs if there's a regional disaster.
+- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs from a regional disaster.
- **Block blobs:** Turn on [soft delete](../blobs/soft-delete-blob-overview.md) to protect against object-level deletions and overwrites, or copy block blobs to another storage account in a different region using [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](storage-use-data-movement-library.md).
- **Files:** Use [Azure Backup](../../backup/azure-file-share-backup-overview.md) to back up your file shares. Also enable [soft delete](../files/storage-files-prevent-file-share-deletion.md) to protect against accidental file share deletions. For geo-redundancy when GRS isn't available, use [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy your files to another storage account in a different region.
- **Tables:** Use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.

## Track outages
-Customers may subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
+Customers can subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your application should expose write failures in a way that alerts you to the possibility of an outage in the primary region.
Microsoft also recommends that you design your application to prepare for the po
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
- [Azure Storage redundancy](storage-redundancy.md)
-- [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md)
-
+- [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md)
+- [How failover for disaster recovery testing (preview) works](storage-failover-customer-managed-planned.md)
storage Storage Failover Customer Managed Planned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-planned.md
+
+ Title: How customer-managed planned failover works
+
+description: Azure Storage supports account failover of geo-redundant storage accounts for disaster recovery testing and planning. Learn what happens to your storage account and storage services during a customer-managed planned failover (preview) to the secondary region to perform disaster recovery testing and planning.
+++++ Last updated : 12/12/2023+++++
+# How customer-managed planned failover works (preview)
+
+Customer-managed storage account planned failover enables you to fail over your entire geo-redundant storage account to the secondary region to do disaster recovery testing. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After testing is complete, you can perform another failover operation to *fail back* to the original primary region. A *failback* is an operation that restores a storage account to its original regional configuration.
+
+This article describes what happens during a customer-managed planned storage account failover and failback at every stage of the process. To understand how a failover due to an unexpected storage endpoint outage works, see [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md).
+
+> [!IMPORTANT]
+> Customer-managed planned failover is currently in PREVIEW.
+>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowSoftFailover` as the feature name.
+
+## Redundancy management during failover and failback
+
+> [!TIP]
+> To understand the varying redundancy states during the storage account failover and failback process in detail, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each.
+
+Azure storage provides a wide variety of redundancy options to help protect your data.
+
+Locally redundant storage (LRS) automatically maintains three copies of your storage account within a single datacenter. LRS is the least expensive replication option, but isn't recommended for applications requiring high availability or durability. Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location, and your data is still accessible for both read and write operations if a zone becomes unavailable.
+
+Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) redundancy options can be used to ensure that your data is highly durable. GRS and RA-GRS use LRS to replicate your data three times locally within both the primary and secondary regions. Configuring your account for read access (RA) allows your data to be read from the secondary region, as long as the region's storage service endpoints are available.
+
+Geo-zone-redundant storage (GZRS) and read-access geo-zone-redundant storage (RA-GZRS) use ZRS replication within the primary region, and LRS within the secondary. As with RA-GRS, configuring RA allows you to read data from the secondary region as long as the storage service endpoints to that region are available.
+
+During the planned failover process, the storage service endpoints to the primary region become read-only and any remaining updates are allowed to finish replicating to the secondary region. Afterward, storage service endpoint DNS entries are switched. Your storage account's secondary endpoints become the new primary endpoints, and the original primary endpoints become the new secondary. Data replication within each region remains unchanged even though the primary and secondary regions are switched. Replication within the new primary is always configured to use LRS, and replication within the original primary remains the same, whether LRS or ZRS.
+
+Azure stores the original redundancy configuration of your storage account in the account's metadata, allowing you to eventually fail back when you're ready.
+
+After failover, the new redundancy configuration of your storage account temporarily becomes GRS. The way in which data is replicated within the primary region at a given point in time determines the zone-redundancy configuration of the storage account. Replication within the new primary is always configured to use LRS, so the account is temporarily nonzonal. Azure immediately begins copying data from the new primary region to the new secondary. If your storage account's original secondary region was configured for RA, access is configured for the new secondary region during failover and failback.
+
+The failback process is essentially the same as the failover process except Azure stores the original redundancy configuration of your storage account and restores it to its original state upon failback. So, if your storage account was originally configured as GZRS, the storage account will be GZRS after failback.
+
+> [!NOTE]
+> Unlike [customer-managed failover](storage-failover-customer-managed-unplanned.md), during planned failover, replication from the primary to secondary region is allowed to finish before the DNS entries for the endpoints are changed to the new secondary. Because of this, data loss is not expected during failover or failback as long as both the primary and secondary regions are available throughout the process.
+
+## How to initiate a failover
+
+To learn how to initiate a failover, see [Initiate a storage account failover](storage-initiate-account-failover.md).
+
+## The failover and failback process
+
+The following diagrams show what happens during a customer-managed planned failover and failback of a storage account.
+
+## [GRS/RA-GRS](#tab/grs-ra-grs)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GRS:
++
+### The failover process (GRS/RA-GRS)
+
+Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, and the subsequent image illustrates them:
+
+1. The original primary region becomes read only.
+1. Replication of all data from the primary region to the secondary region completes.
+1. DNS entries for storage service endpoints in the secondary region are promoted and become the new primary endpoints for your storage account.
+
+The failover typically takes about an hour.
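While the failover runs, a sketch like the following (Azure PowerShell, hypothetical names) can be used to check whether it's still in progress:

```powershell
# FailoverInProgress is $true while a failover is still running (hypothetical names).
(Get-AzStorageAccount `
    -ResourceGroupName "rg-example" `
    -Name "examplestorageacct" `
    -IncludeGeoReplicationStats).FailoverInProgress
```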
++
+After the failover is complete, the original primary region becomes the new secondary (1) and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same but their DNS entries are changed to point to the new primary region (3). Users can resume writing data to the storage account in the new primary region and the data is then copied asynchronously to the new secondary (4) as shown in the following image:
++
+While in the failover state, perform your disaster recovery testing.
+
+### The failback process (GRS/RA-GRS)
+
+After testing is complete, perform another failover to fail back to the original primary region. The following steps describe the failback process, and the subsequent image illustrates them:
+
+1. The original primary region becomes read only.
+1. All data finishes replicating from the current primary region to the current secondary region.
+1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
+
+The failback typically takes about an hour.
++
+After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover:
++
+## [GZRS/RA-GZRS](#tab/gzrs-ra-gzrs)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GZRS:
++
+### The failover process (GZRS/RA-GZRS)
+
+Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, and the subsequent image illustrates them:
+
+1. The current primary region becomes read only.
+1. All data finishes replicating from the primary region to the secondary region.
+1. Storage service endpoint DNS entries are switched. Your storage account's endpoints in the secondary region become your new primary endpoints.
+
+The failover typically takes about an hour.
++
+After the failover is complete, the original primary region becomes the new secondary (1) and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same but are pointing to the new primary region (3). Users can resume writing data to the storage account in the new primary region and the data is then copied asynchronously to the new secondary (4) as shown in the following image:
++
+While in the failover state, perform your disaster recovery testing.
+
+### The failback process (GZRS/RA-GZRS)
+
+When testing is complete, perform another failover to fail back to the original primary region. The following image illustrates the steps involved in the failover process.
+
+1. The current primary region becomes read only.
+1. All data finishes replicating from the current primary region to the current secondary region.
+1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
+
+The failback typically takes about an hour.
++
+After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover:
++++
+## See also
+
+- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md)
+- [Initiate an account failover](storage-initiate-account-failover.md)
+- [How customer-managed failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
Title: How Azure Storage account customer-managed failover works
+ Title: How Azure Storage account customer-managed failover to recover from an outage in the primary region works
description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable.
Previously updated : 09/22/2023 Last updated : 09/24/2023
-# How customer-managed storage account failover works
+# How customer-managed failover works
Customer-managed failover of Azure Storage accounts enables you to fail over your entire geo-redundant storage account to the secondary region if the storage service endpoints for the primary region become unavailable. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After the storage service endpoint outage has been resolved, you can perform another failover operation to *fail back* to the original primary region.
When a storage account is configured for GRS or RA-GRS redundancy, data is repli
During the customer-managed failover process, the DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account. After failover, the copy of your storage account in the original primary region is deleted and your storage account continues to be replicated three times locally within the original secondary region (the new primary). At that point, your storage account becomes locally redundant (LRS).
-The original and current redundancy configurations are stored in the properties of the storage account to allow you eventually return to your original configuration when you fail back.
+The original and current redundancy configurations are stored in the properties of the storage account. This functionality allows you to eventually return to your original configuration when you fail back.
To regain geo-redundancy after a failover, you will need to reconfigure your account as GRS. (GZRS is not an option post-failover since the new primary will be LRS after the failover). After the account is reconfigured for geo-redundancy, Azure immediately begins copying data from the new primary region to the new secondary. If you configure your storage account for read access (RA) to the secondary region, that access will be available but it may take some time for replication from the primary to make the secondary current.
To regain geo-redundancy after a failover, you will need to reconfigure your acc
>
> **To avoid major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the times at which data was last written to the new primary to evaluate potential data loss.
-The failback process is essentially the same as the failover process except Azure restores the replication configuration to its original state before it was failed over (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after faillback becomes ZRS.
+The failback process is essentially the same as the failover process except Azure restores the replication configuration to its original state before it was failed over (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after failback becomes ZRS.
After failback, you can configure your storage account to be geo-redundant again. If the original primary region was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was configured as ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see [Change how a storage account is replicated](redundancy-migration.md).
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
-description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
+description: Learn how to initiate an account failover if the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
Previously updated : 09/15/2023 Last updated : 09/25/2023

# Initiate a storage account failover
-If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can initiate an account failover. An account failover updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary region. Forced failover enables you to maintain high availability for your applications.
+Azure Storage supports customer-initiated account failover for geo-redundant storage accounts. With account failover, you can initiate the failover process for your storage account if the primary storage service endpoints become unavailable, or to perform disaster recovery testing. The failover updates the DNS entries for the storage service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
-This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or Azure CLI. To learn more about account failover, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or the Azure CLI.
> [!WARNING]
> An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+To learn more about account failover, see [Azure storage disaster recovery planning and failover](storage-disaster-recovery-guidance.md).
+ [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]

## Prerequisites
-Before you can perform an account failover on your storage account, make sure that:
+Before failing over your storage account, review the important points covered in [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover).
-> [!div class="checklist"]
-> - Your storage account is configured for geo-replication (GRS, GZRS, RA-GRS or RA-GZRS). For more information about Azure Storage redundancy, see [Azure Storage redundancy](storage-redundancy.md).
-> - The type of your storage account supports customer-initiated failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
-> - Your storage account doesn't have any features or services enabled that are not supported for account failover. See [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) for a detailed list.
+- **Potential data loss**: When you fail over your storage account in response to an unexpected outage in the primary region, some data loss is expected.
+
+> [!WARNING]
+> It is very important to understand the expectations for data loss with certain types of failover, and to plan for it. For details on the implications of an account failover and how to prepare for data loss, see [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+- **Geo-redundancy**: Before you can perform an account failover on your storage account, make sure it's configured for geo-redundancy and that the initial synchronization from the primary to the secondary region is complete. For more information about Azure storage redundancy options, see [Azure Storage redundancy](storage-redundancy.md). If your account isn't configured for geo-redundancy, you can change it. For more information, see [Change how a storage account is replicated](redundancy-migration.md).
+- **Understand the different types of account failover**: There are three types of storage account failover. To learn the use cases for each and how they function differently, see [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover). This article focuses on how to initiate a *customer-managed failover* to recover from the service endpoints being unavailable in the primary region, or a *customer-managed* ***planned*** *failover* (preview) used primarily to perform disaster recovery testing.
+- **Plan for unsupported features and services**: Review [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) and take the appropriate action before initiating a failover.
+- **Supported storage account types**: Ensure the type of your storage account supports customer-initiated failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
+- **Set your expectations for timing and cost**: The time it takes to fail over after you initiate it can vary, but it typically takes less than one hour. A customer-managed failover associated with an outage in the primary region loses its geo-redundancy configuration after a failover (and failback). Reconfiguring GRS typically incurs extra time and cost. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
## Initiate the failover
-You can initiate an account failover from the Azure portal, PowerShell, or the Azure CLI.
+You can initiate either type of customer-managed failover using the Azure portal, PowerShell, or the Azure CLI.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
You can initiate an account failover from the Azure portal, PowerShell, or the A
To initiate an account failover from the Azure portal, follow these steps:

1. Navigate to your storage account.
-1. Under **Settings**, select **Geo-replication**. The following image shows the geo-replication and failover status of a storage account.
+1. Under **Data management**, select **Redundancy**. The following image shows the geo-redundancy configuration and failover status of a storage account.
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-redundancy.png" alt-text="Screenshot showing redundancy and failover status." lightbox="media/storage-initiate-account-failover/portal-failover-redundancy.png":::
+
+ If your storage account is configured with a hierarchical namespace enabled, the following message is displayed:
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-hns-not-supported.png" alt-text="Screenshot showing that failover isn't supported for hierarchical namespace." lightbox="media/storage-initiate-account-failover/portal-failover-hns-not-supported.png":::
+
+1. Verify that your storage account is configured for geo-redundant storage (GRS, RA-GRS, GZRS or RA-GZRS). If it's not, then select the desired redundancy configuration under **Redundancy** and select **Save** to change it. After changing the geo-redundancy configuration, it will take several minutes for your data to synchronize from the primary to the secondary region. You cannot initiate a failover until the synchronization is complete. You might see the following message on the **Redundancy** page until all of your data is replicated:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png" alt-text="Screenshot showing message indicating synchronization is still in progress." lightbox="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png":::
+
+1. Select **Prepare for failover**. You will be presented with a page similar to the image that follows where you can select the type of failover to perform:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare.png" lightbox="media/storage-initiate-account-failover/portal-failover-prepare.png" alt-text="Screenshot showing the prepare for failover window.":::
+
+ > [!NOTE]
+ > If your storage account is configured with a hierarchical namespace enabled, the `Failover` option will be grayed out.
+1. Select the type of failover to prepare for. The confirmation page varies depending on the type of failover you select.
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare.png" alt-text="Screenshot showing geo-replication and failover status":::
+ **If you select `Failover`**:
-1. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). If it's not, then select **Configuration** under **Settings** to update your account to be geo-redundant.
-1. The **Last Sync Time** property indicates how far the secondary is behind from the primary. **Last Sync Time** provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
-1. Select **Prepare for failover**.
-1. Review the confirmation dialog. When you are ready, enter **Yes** to confirm and initiate the failover.
+ You will see a warning about potential data loss and information about needing to manually reconfigure geo-redundancy after the failover:
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-confirm.png" alt-text="Screenshot showing confirmation dialog for an account failover":::
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover.png" alt-text="Screenshot showing the failover option selected on the Prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover.png":::
+
+ For more information about potential data loss and what happens to your account redundancy configuration during failover, see:
+
+ > [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies)
+ >
+ > [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover)
+ The **Last Sync Time** property indicates the last time the secondary was synchronized with the primary. The difference between **Last Sync Time** and the current time provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
+
+ **If you select `Planned failover`** (preview):
+
+ You will see the **Last Sync Time** value, but notice in the image that follows that the failover will not occur until after all of the remaining data is synchronized to the secondary region.
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned.png" alt-text="Screenshot showing the planned failover option selected on the prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned.png":::
+
+ As a result, data loss is not expected during the failover. Since the redundancy configuration within each region does not change during a planned failover or failback, there is no need to manually reconfigure geo-redundancy after a failover.
+
+1. Review the **Prepare for failover** page. When you are ready, type **yes** and select **Failover** to confirm and initiate the failover process.
+
+ You will see a message indicating the failover is in progress:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-in-progress.png" alt-text="Screenshot showing the failover in-progress message." lightbox="media/storage-initiate-account-failover/portal-failover-in-progress-redundancy.png":::
## [PowerShell](#tab/azure-powershell)
-To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
+To get the current redundancy and failover information for your storage account, and then initiate a failover, follow these steps:
+
+> [!div class="checklist"]
+> - [Install the Azure Storage preview module for PowerShell](#install-the-azure-storage-preview-module-for-powershell)
+> - [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell)
+> - [Initiate a failover of the storage account with PowerShell](#initiate-a-failover-of-the-storage-account-with-powershell)
+### Install the Azure Storage preview module for PowerShell
+
+To use PowerShell to initiate and monitor a **planned** customer-managed account failover (preview) in addition to a customer-initiated failover, install the [Az.Storage 5.2.2-preview module](https://www.powershellgallery.com/packages/Az.Storage/5.2.2-preview). Earlier versions of the module support customer-managed failover (unplanned), but not planned failover. The preview version supports the new `FailoverType` parameter which allows you to specify either `planned` or `unplanned`.
+
+#### Installing and running the preview module on PowerShell 5.1
+
+Microsoft recommends you install and use the latest version of PowerShell, but if you are installing the preview module on Windows PowerShell 5.1, and you get the following error, you will need to [update PowerShellGet to the latest version](/powershell/gallery/powershellget/update-powershell-51) before installing the Az.Storage 5.2.2 preview module:
+
+```Sample
+PS C:\Windows\system32> Install-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowPrerelease
+Install-Module : Cannot process argument transformation on parameter 'RequiredVersion'. Cannot convert value "5.2.2-preview" to type "System.Version". Error: "Input string was not in a correct format."
+At line:1 char:50
++ ... nstall-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowP ...
++                                                     ~~~~~~~~~~~~~
+ + CategoryInfo : InvalidData: (:) [Install-Module], ParameterBindingArgumentTransformationException
+ + FullyQualifiedErrorId : ParameterArgumentTransformationError,Install-Module
+```
+
+To install the latest version of PowerShellGet and the Az.Storage preview module, perform the following steps:
+
+1. Run the following command to update PowerShellGet:
+
+ ```powershell
+ Install-Module PowerShellGet -Repository PSGallery -Force
+ ```
+
+1. Close and reopen PowerShell
+1. Install the Az.Storage preview module using the following command:
+
+ ```powershell
+ Install-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowPrerelease
+ ```
+
+1. Determine whether you already have a higher version of the Az.Storage module installed by running the command:
+
+ ```powershell
+ Get-InstalledModule Az.Storage -AllVersions
+ ```
+
+If a higher version such as 5.3.0 or 5.4.0 is also installed, you will need to explicitly import the preview version before using it.
+
+1. Close and reopen PowerShell again
+1. Before running any other commands, import the preview version of the module using the following command:
+
+ ```powershell
+ Import-Module Az.Storage -RequiredVersion 5.2.2
+ ```
+
+1. Verify that the `FailoverType` parameter is supported by running the following command:
+
+ ```powershell
+ Get-Help Invoke-AzStorageAccountFailover -Parameter FailoverType
+ ```
+
+For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+
+### Get the current status of the storage account with PowerShell
+
+Check the status of the storage account before failing over. Examine properties that can affect failing over such as:
+
+- The primary and secondary regions and their status
+- The storage kind and access tier
+- The current failover status
+- The last sync time
+- The storage account SKU conversion status
+
+```powershell
+ # Log in first with Connect-AzAccount
+ Connect-AzAccount
+ # Specify the resource group name and storage account name
+ $rgName = "<your resource group name>"
+ $saName = "<your storage account name>"
+ # Get the storage account information
+ Get-AzStorageAccount `
+ -Name $saName `
+ -ResourceGroupName $rgName `
+ -IncludeGeoReplicationStats
+```
-To initiate an account failover from PowerShell, call the following command:
+To refine the list of displayed properties to the most relevant set, consider replacing the `Get-AzStorageAccount` command in the preceding example with the following command:
```powershell
-Invoke-AzStorageAccountFailover -ResourceGroupName <resource-group-name> -Name <account-name>
+Get-AzStorageAccount `
+ -Name $saName `
+ -ResourceGroupName $rgName `
+ -IncludeGeoReplicationStats `
+ | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,@{E={$_.Kind};L="AccountType"},AccessTier,LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus,GeoReplicationStats `
+ -ExpandProperty Sku `
+ | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,AccountType,AccessTier,@{E={$_.Name};L="RedundancyType"},LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus `
+ -ExpandProperty GeoReplicationStats `
+ | Format-List
+```
+
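If you only want to check how far behind the secondary region is before deciding to fail over, a minimal sketch (reusing the `$rgName` and `$saName` variables from the example above) reads the last sync time directly:

```powershell
# Read only the last sync time to estimate potential data loss (the RPO)
$account = Get-AzStorageAccount `
    -ResourceGroupName $rgName `
    -Name $saName `
    -IncludeGeoReplicationStats

$account.GeoReplicationStats.LastSyncTime
```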
+### Initiate a failover of the storage account with PowerShell
+
+```powershell
+Invoke-AzStorageAccountFailover `
+ -ResourceGroupName $rgName `
+ -Name $saName `
+ -FailoverType <planned|unplanned> # Specify "planned" or "unplanned" failover (without the quotes)
+```

## [Azure CLI](#tab/azure-cli)
-To use Azure CLI to initiate an account failover, call the following commands:
+To get the current redundancy and failover information for your storage account, and then initiate a failover, follow these steps:
+
+> [!div class="checklist"]
+> - [Install the Azure Storage preview extension for Azure CLI](#install-the-azure-storage-preview-extension-for-azure-cli)
+> - [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli)
+> - [Initiate a failover of the storage account with Azure CLI](#initiate-a-failover-of-the-storage-account-with-azure-cli)
+
+### Install the Azure Storage preview extension for Azure CLI
+
+1. Install the latest version of the Azure CLI. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+1. Install the Azure CLI storage preview extension using the following command:
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+
+ > [!IMPORTANT]
+ > The Azure CLI storage preview extension adds support for features or arguments that are currently in PREVIEW.
+ >
+ > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### Get the current status of the storage account with Azure CLI
+
+Run the following command to get the current geo-replication information for the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
+
+```azurecli
+az storage account show \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --expand geoReplicationStats
+```
+
+For more information about the `storage account show` command, run:
+
+```azurecli
+az storage account show --help
+```
+
+### Initiate a failover of the storage account with Azure CLI
+
+Run the following command to initiate a failover of the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
-```azurecli-interactive
-az storage account show \ --name accountName \ --expand geoReplicationStats
-az storage account failover \ --name accountName
+```azurecli
+az storage account failover \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --failover-type <planned|unplanned>
+```
+
+For more information about the `storage account failover` command, run:
+
+```azurecli
+az storage account failover --help
```
-## Important implications of account failover
+## Monitor the failover
+
+You can monitor the status of the failover using the Azure portal, PowerShell, or the Azure CLI.
+
+## [Portal](#tab/azure-portal)
+
+The status of the failover is shown in the Azure portal in **Notifications**, in the activity log, and on the **Redundancy** page of the storage account.
+
+### Notifications
+
+To check the status of the failover, select the notification icon (bell) on the far right of the Azure portal global page header:
+### Activity log
-When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the potential impact to your storage account before you initiate a failover.
+To view the detailed status of a failover, select the **More events in the activity log** link in the notification, or go to the **Activity log** page of the storage account:
-To estimate the extent of likely data loss before you initiate a failover, check the **Last Sync Time** property. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
-The time it takes to failover after initiation can vary though typically less than one hour.
+### Redundancy page
-After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. For additional information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+Messages on the **Redundancy** page of the storage account show whether the failover is still in progress:
-After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
-- The number and size of the objects in the storage account. Many small objects can take longer than fewer and larger objects.-- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.-- If using Blob storage, the number of snapshots per blob.-- If using Table storage, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
+If the failover is nearing completion, the redundancy page might show the original secondary region as the new primary, but still display a message indicating the failover is in progress:
+When the failover is complete, the redundancy page shows the last failover time and the location of the new primary region. If a planned failover was performed, the new secondary region is also displayed. The following image shows the new storage account status after a failover resulting from an outage of the endpoints for the original primary region (unplanned):
+## [PowerShell](#tab/azure-powershell)
+
+You can use Azure PowerShell to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell).
+
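If you prefer to poll from a script rather than rerun the command manually, a minimal sketch (assuming `$rgName` and `$saName` are still set from the earlier steps) loops on the `FailoverInProgress` property shown in the earlier `Select-Object` example:

```powershell
# Poll the storage account until the failover is no longer reported as in progress
do {
    Start-Sleep -Seconds 60
    $account = Get-AzStorageAccount -ResourceGroupName $rgName -Name $saName
    Write-Host "Failover in progress: $($account.FailoverInProgress)"
} while ($account.FailoverInProgress)
```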
+## [Azure CLI](#tab/azure-cli)
+
+You can use the Azure CLI to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli).
++
-## Next steps
+## See also
- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md) - [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 09/06/2023 Last updated : 01/05/2024
# Azure Storage redundancy
-Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
+Azure Storage always stores multiple copies of your data to protect it from planned and unplanned events. Examples of these events include transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:
When deciding which redundancy option is best for your scenario, consider the tr
The services that comprise Azure Storage are managed through a common Azure resource called a *storage account*. The storage account represents a shared pool of storage that can be used to deploy storage resources such as blob containers (Blob Storage), file shares (Azure Files), tables (Table Storage), or queues (Queue Storage). For more information about Azure Storage accounts, see [Storage account overview](storage-account-overview.md).
-The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. You may want to isolate different types of resources in separate storage accounts if they have different redundancy requirements.
+The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. Consider isolating different types of resources in separate storage accounts if they have different redundancy requirements.
## Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary
Locally redundant storage (LRS) replicates your storage account three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
-LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
+LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
A write request to a storage account that is using LRS happens synchronously. The write operation returns successfully only after the data is written to all three replicas.
The following diagram shows how your data is replicated within a single data cen
LRS is a good choice for the following scenarios: -- If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.-- If your application is restricted to replicating data only within a country or region due to data governance requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).-- If your scenario is using Azure unmanaged disks, you may opt for LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
+- If your application stores data that can be easily reconstructed if data loss occurs, consider choosing LRS.
+- If your application is restricted to replicating data only within a country or region due to data governance requirements, consider choosing LRS. In some cases, the paired regions across which the data is geo-replicated might be within another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
+- If your scenario is using Azure unmanaged disks, consider using LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
### Zone-redundant storage Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates could affect your application if you access data before the updates are complete. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
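As an illustration only, the following PowerShell sketch shows the general shape of such a retry policy; `Invoke-MyStorageOperation` is a hypothetical placeholder for whatever read or write call your application makes:

```powershell
# Minimal sketch of a retry loop with exponential back-off.
# Invoke-MyStorageOperation is a hypothetical placeholder, not a real cmdlet.
$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Invoke-MyStorageOperation
        break
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds ([math]::Pow(2, $attempt))  # wait 2, 4, 8, then 16 seconds
    }
}
```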
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones. If an availability zone is temporarily unavailable, the operation returns successfully after the data is written to all available zones.
The following diagram shows how your data is replicated across availability zone
:::image type="content" source="media/storage-redundancy/zone-redundant-storage.png" alt-text="Diagram showing how data is replicated in the primary region with ZRS":::
-ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
+ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself might not fully protect your data against a regional disaster where multiple zones are permanently affected. [Geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS) uses ZRS in the primary region and also geo-replicates your data to a secondary region. GZRS is available in many regions, and is recommended for protection against regional disasters.
The archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with av
ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts, including: -- Azure Blob storage (hot and cool block blobs and append blobs, non-disk page blobs)
+- Azure Blob storage (hot and cool block blobs and append blobs, nondisk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support zone-redundant storage (ZRS) for managed disk
## Redundancy in a secondary region
-For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If your storage account is copied to a secondary region, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.
+Redundancy options can help provide high durability for your applications. In many regions, you can copy the data within your storage account to a secondary region located hundreds of miles away from the primary region. Copying your storage account to a secondary region ensures that your data remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable.
When you create a storage account, you select the primary region for the account. The paired secondary region is determined based on the primary region, and can't be changed. For more information about regions supported by Azure, see [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).
Azure Storage offers two options for copying your data to a secondary region:
With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a failover to the primary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
-If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover completes, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
> [!IMPORTANT] > Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.
The following diagram shows how your data is replicated with GRS or RA-GRS:
### Geo-zone-redundant storage
-Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region and is also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
+Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region. It's also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
-With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data is also durable in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year.
+With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data also remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year.
The following diagram shows how your data is replicated with GZRS or RA-GZRS: :::image type="content" source="media/storage-redundancy/geo-zone-redundant-storage.png" alt-text="Diagram showing how data is replicated with GZRS or RA-GZRS":::
-Only standard general-purpose v2 storage accounts support GZRS. GZRS is supported by all of the Azure Storage services, including:
+Only standard general-purpose v2 storage accounts support GZRS. All Azure Storage services support GZRS, including:
-- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
+- Azure Blob storage (hot and cool block blobs, nondisk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support geo-zone-redundant storage (GZRS), see [Azure
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region isn't directly accessible to users or applications when an outage occurs in the primary region, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the storage service endpoints in the secondary region become the new primary endpoints for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md).
If your applications require high availability, then you can configure your storage account for read access to the secondary region. When you enable read access to the secondary region, then your data is always available to be read from the secondary, including in a situation where the primary region becomes unavailable. Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) configurations permit read access to the secondary region.
If your applications require high availability, then you can configure your stor
If your storage account is configured for read access to the secondary region, then you can design your applications to seamlessly shift to reading data from the secondary region if the primary region becomes unavailable for any reason.
-The secondary region is available for read access after you enable RA-GRS or RA-GZRS, so that you can test your application in advance to make sure that it will properly read from the secondary in the event of an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
+The secondary region is available for read access after you enable RA-GRS or RA-GZRS. This availability allows you to test your application in advance to ensure that it reads properly from the secondary region during an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
-When read access to the secondary is enabled, your application can be read from the secondary endpoint as well as from the primary endpoint. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
+When read access to the secondary is enabled, your application can read from both the secondary and primary endpoints. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
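For example, you can list the secondary endpoints for an account configured for RA-GRS or RA-GZRS with a short PowerShell sketch (assuming `$rgName` and `$saName` hold your resource group and storage account names):

```powershell
# Show the read-only secondary endpoints for the storage account
$account = Get-AzStorageAccount -ResourceGroupName $rgName -Name $saName
$account.SecondaryEndpoints.Blob   # for example: https://myaccount-secondary.blob.core.windows.net/
```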
#### Plan for data loss
-Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost and that files within a directory or container would not be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster strikes the primary region, it's likely that some data would be lost and that files within a directory or container wouldn't be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
## Summary of redundancy options
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-|
-| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
+| Percent durability of objects over a given year | at least 99.999999999%<br/>(11 9's) | at least 99.9999999999%<br/>(12 9's) | at least 99.99999999999999%<br/>(16 9's) | at least 99.99999999999999%<br/>(16 9's) |
| Availability for read requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool or archive access tiers) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool access tier) for RA-GZRS | | Availability for write requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | | Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
The following table indicates whether your data is durable and available in a gi
| Outage scenario | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | A node within a data center becomes unavailable | Yes | Yes | Yes | Yes |
-| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
+| An entire data center (zonal or nonzonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
| A region-wide outage occurs in the primary region | No | No | Yes<sup>1</sup> | Yes<sup>1</sup> | | Read access to the secondary region is available if the primary region becomes unavailable | No | No | Yes (with RA-GRS) | Yes (with RA-GZRS) |
The following table indicates whether your data is durable and available in a gi
### Supported Azure Storage services
-The following table shows which redundancy options are supported by each Azure Storage service.
+The following table shows the redundancy options supported by each Azure Storage service.
| Service | LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS | ||--|--|--|--|||
Unmanaged disks don't support ZRS or GZRS.
For pricing information for each redundancy option, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). > [!NOTE]
-Block blob storage accounts support locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
+> Block blob storage accounts support locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
## Data integrity
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
Last updated 11/03/2023 + # Authorize access to blobs and files with AzCopy and Microsoft Entra ID
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
If you intend to use Azure Elastic SAN Preview or Azure Disks as backing storage
If you already have an AKS cluster deployed, skip this section and go to [Install Azure Container Storage on an existing AKS cluster](#install-azure-container-storage-on-an-existing-aks-cluster).
-Run the following command to create a new AKS cluster, install Azure Container Storage, and create a storage pool. Replace `<cluster-name>` and `<resource-group-name>` with your own values, and specify which VM type you want to use. You'll need a node pool of at least three Linux VMs. Replace `<storage-pool-type>` with `azureDisk`, `ephemeraldisk`, or `elasticSan`.
+Run the following command to create a new AKS cluster, install Azure Container Storage, and create a storage pool. Replace `<cluster-name>` and `<resource-group-name>` with your own values, and specify which VM type you want to use. You'll need a node pool of at least three Linux VMs. Replace `<storage-pool-type>` with `azureDisk`, `ephemeralDisk`, or `elasticSan`.
Optional storage pool parameters:
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
Title: Install Azure Container Storage Preview for use with Azure Kubernetes Service (AKS)
+ Title: Tutorial - Install Azure Container Storage Preview for use with Azure Kubernetes Service (AKS)
description: Learn how to install Azure Container Storage Preview for use with Azure Kubernetes Service. Create an AKS cluster, label the node pool, and install the Azure Container Storage extension. - Previously updated : 11/07/2023+ Last updated : 01/08/2024
-# Install Azure Container Storage Preview for use with Azure Kubernetes Service
+# Tutorial: Install Azure Container Storage Preview for use with Azure Kubernetes Service
-[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to create an [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster, label the node pool, and install Azure Container Storage Preview on the cluster. Alternatively, you can install Azure Container Storage Preview [using a QuickStart](container-storage-aks-quickstart.md) instead of following the manual steps in this article.
+[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. In this tutorial, you'll create an [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster and install Azure Container Storage Preview on the cluster. Alternatively, you can install Azure Container Storage Preview [using a QuickStart](container-storage-aks-quickstart.md) instead of following the manual steps in this tutorial.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!div class="checklist"]
+> * Create a resource group
+> * Choose a data storage option and VM type
+> * Create an AKS cluster
+> * Connect to the cluster
+> * Label the node pool
+> * Assign Contributor role to AKS managed identity
+> * Install Azure Container Storage extension
## Prerequisites
## Getting started -- Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
+* Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
-- [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+* [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
-- If you're using Azure Cloud Shell, you might be prompted to mount storage. Select the Azure subscription where you want to create the storage account and select **Create**.
+* If you're using Azure Cloud Shell, you might be prompted to mount storage. Select the Azure subscription where you want to create the storage account and select **Create**.
## Set subscription context
Before you create your cluster, you should understand which back-end storage opt
### Data storage options -- **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+* **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently; however, persistent volumes can only be attached by one consumer at a time.
-- **[Azure Disks](../../virtual-machines/managed-disks-overview.md)**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
+* **[Azure Disks](../../virtual-machines/managed-disks-overview.md)**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
-- **Ephemeral Disk**: This option uses local NVMe drives on the AKS nodes and is extremely latency sensitive (low sub-ms latency), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment.
+* **Ephemeral Disk**: This option uses local NVMe drives on the AKS nodes and is extremely latency sensitive (low sub-ms latency), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment.
### VM types To use Azure Container Storage, you'll need a node pool of at least three Linux VMs. Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
-If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
+If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives.
Congratulations, you've successfully installed Azure Container Storage. You now
Now you can create a storage pool and persistent volume claim, and then deploy a pod and attach a persistent volume. Follow the steps in the appropriate how-to article. -- [Use Azure Container Storage Preview with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md)-- [Use Azure Container Storage Preview with Azure Disks](use-container-storage-with-managed-disks.md)-- [Use Azure Container Storage with Azure Ephemeral disk (NVMe)](use-container-storage-with-local-disk.md)
+* [Use Azure Container Storage Preview with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md)
+* [Use Azure Container Storage Preview with Azure Disks](use-container-storage-with-managed-disks.md)
+* [Use Azure Container Storage with Azure Ephemeral disk (NVMe)](use-container-storage-with-local-disk.md)
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md
Title: Manage registered servers with Azure File Sync
description: Learn how to register and unregister a Windows Server with an Azure File Sync Storage Sync Service. + Last updated 10/04/2023
storage Files Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-disaster-recovery.md
Write access is restored for geo-redundant accounts once the DNS entry has been
> [!IMPORTANT] > After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint/region. To resume replication to the new secondary, configure the account for geo-redundancy again. >
-> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](../common/storage-initiate-account-failover.md#important-implications-of-account-failover).
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](../common/storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
### Anticipate data loss
storage Storage Files Migration Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nfs.md
Title: Migrate to NFS Azure file shares
-description: Learn how to migrate from Linux file servers to NFS Azure file shares using open source file copy tools. Compare the performance of common file copy tools.
+ Title: Migrate to NFS Azure file shares from Linux
+description: Learn how to migrate from Linux file servers to NFS Azure file shares using recommended open source file copy tools. Compare the performance of file copy tools fpsync and rsync.
Previously updated : 12/19/2023 Last updated : 01/08/2024
synapse-analytics Apache Spark 34 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-34-runtime.md
+ Last updated 11/17/2023
The following table lists all the default level packages for R and their respect
## Migration between Apache Spark versions - support For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).---
virtual-desktop App Attach Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md
Title: Add and manage MSIX app attach and app attach applications - Azure Virtual Desktop description: Learn how to add and manage applications with MSIX app attach and app attach in Azure Virtual Desktop using the Azure portal and Azure PowerShell, where you can dynamically attach applications from an application package to a user session. + zone_pivot_groups: azure-virtual-desktop-app-attach
virtual-desktop Configure Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-device-redirections.md
Title: Configure device redirection - Azure
description: How to configure device redirection for Azure Virtual Desktop. Previously updated : 11/14/2023 Last updated : 01/08/2024
Set the following RDP property to configure WebAuthn redirection:
When enabled, WebAuthn requests from the session are sent to the local PC to be completed using the local Windows Hello for Business or security devices like FIDO keys. For more information, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication).
-## Disable drive redirection
+## Disable redirection on the local device
-If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable drive redirection for security purposes. To disable drive redirection:
+If you're connecting from personal resources to corporate ones using the Windows Desktop clients, you can disable drive, printer, and clipboard redirection on your local device for security purposes by overriding the configuration from your administrator.
-1. Open the **Registry Editor (regedit)**.
+### Disable drive redirection
-2. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**.
+To disable drive redirection:
-3. Create the following registry key:
+1. Open the **Registry Editor (regedit)**.
- - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client
- - **Type**: REG_DWORD
- - **Name**: DisableDriveRedirection
+1. Go to the following registry key and create or set the value:
-4. Set the value of the registry key to **0**.
+ - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `DisableDriveRedirection`
+ - **Value data**: `1`
-## Disable printer redirection
+### Disable printer redirection
-If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable printer redirection for security purposes. To disable printer redirection:
+To disable printer redirection:
1. Open the **Registry Editor (regedit)**.
-1. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**.
-
-1. Create the following registry key:
+1. Go to the following registry key and create or set the value:
- - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client
- - **Type**: REG_DWORD
- - **Name**: DisablePrinterRedirection
+ - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `DisablePrinterRedirection`
+ - **Value data**: `1`
-1. Set the value of the registry key to **0**.
+### Disable clipboard redirection
-## Disable clipboard redirection
-
-If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable clipboard redirection for security purposes. To disable clipboard redirection:
+To disable clipboard redirection:
1. Open the **Registry Editor (regedit)**.
-1. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**.
-
-1. Create the following registry key:
-
- - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client
- - **Type**: REG_DWORD
- - **Name**: DisableClipboardRedirection
+1. Go to the following registry key and create or set the value:
-1. Set the value of the registry key to **0**.
+ - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `DisableClipboardRedirection`
+ - **Value data**: `1`
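
If you'd rather script these settings than edit the registry by hand, a minimal PowerShell sketch (run from an elevated session, and assuming the **Terminal Server Client** key already exists) could set all three values:

```powershell
# Disable drive, printer, and clipboard redirection on the local device
$key = "HKLM:\Software\Microsoft\Terminal Server Client"
foreach ($name in "DisableDriveRedirection", "DisablePrinterRedirection", "DisableClipboardRedirection") {
    New-ItemProperty -Path $key -Name $name -PropertyType DWORD -Value 1 -Force | Out-Null
}
```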
## Next steps
virtual-desktop Publish Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/publish-applications.md
Title: Publish applications with RemoteApp in Azure Virtual Desktop portal - Azu
description: How to publish applications with RemoteApp in Azure Virtual Desktop using the Azure portal and Azure PowerShell. + Last updated 12/08/2023
virtual-desktop Troubleshoot Client Windows Basic Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-basic-shared.md
There are a few basic troubleshooting steps you can try if you're having issues
1. Make sure you're connected to the internet.
-1. Make sure your dev box is running. For more information, see [Shutdown, restart or start a dev box](../dev-box/how-to-create-dev-boxes-developer-portal.md#shutdown-restart-or-start-a-dev-box).
+1. Make sure your dev box is running. For more information, see [Shutdown, restart or start a dev box](../dev-box/how-to-create-dev-boxes-developer-portal.md#shut-down-restart-or-start-a-dev-box).
1. Try to connect to your dev box from the Dev Box developer portal. For more information, see [Connect to a dev box](../dev-box/quickstart-create-dev-box.md#connect-to-a-dev-box).
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv2-series.md
Example confidential use cases include: databases, blockchain, multiparty data a
[Turbo Boost Max 3.0](https://www.intel.com/content/www/us/en/gaming/resources/turbo-boost.html): Supported (Tenant VM will report 3.7 GHz, but will reach Turbo Speeds)<br> [Hyper-Threading](https://www.intel.com/content/www/us/en/gaming/resources/hyper-threading.html): Not Supported<br>
-[Premium Storage](premium-storage-performance.md): Supported (Not Supported for Standard_DC8_v2)<br>
+[Premium Storage](premium-storage-performance.md): Supported<br>
[Premium Storage Caching](premium-storage-performance.md): Supported<br> [Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
virtual-machines Vmaccess Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess-linux.md
Last updated 04/12/2023-+ # VMAccess Extension for Linux
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-troubleshooting.md
Last updated 08/06/2019-+ # Azure Disk Encryption for Linux VMs troubleshooting guide
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
Last updated 09/15/2023 -+ # Change the size of a virtual machine
virtual-machines Centos End Of Life https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md
description: Understand your options for moving CentOS workloads
-+ Last updated 12/1/2023
If you're moving to another distribution, you need to redeploy your Virtual Mach
The end-of-life moment for CentOS may also be an opportunity for you to consider modernizing your workload, move to a PaaS, SaaS or containerized solution. [What is Application Modernization? | Microsoft Azure](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-application-modernization/)-
virtual-machines Oracle Weblogic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-weblogic.md
Last updated 10/24/2023 -+ # What are solutions for running Oracle WebLogic Server on Azure Virtual Machines?
virtual-machines Weblogic Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-aks.md
Last updated 10/24/2023 -+ # What are solutions for running Oracle WebLogic Server on the Azure Kubernetes Service?
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
integrations: |-
Advanced metrics are optional, and turning them on automatically turns on basic metrics collection. Advanced metrics currently include only `Network Policy Manager_ipset_counts`.
-Learn more about [Azure Monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md).
+Learn more about [Azure Monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-data-collection-configmap.md).
### Visualization options for Azure Monitor
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Title: 'Azure Virtual WAN FAQ'
description: See answers to frequently asked questions about Azure Virtual WAN networks, clients, gateways, devices, partners, and connections. + Last updated 10/30/2023
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
Last updated 11/20/2023 -+ ms.devlang: azurecli- # About VPN Gateway configuration settings