Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Concept General Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md | The General document model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models. The general document API supports most form types and analyzes your documents to extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels. -## Development options ::: moniker range="doc-intel-3.1.0" +## Development options + Document Intelligence v3.1 supports the following tools, applications, and libraries: | Feature | Resources | Model ID | Document Intelligence v3.0 supports the following tools, applications, and libraries: |**General document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-document**| ::: moniker-end + ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)] Keys can also exist in isolation when the model detects that a key exists, with * Expect to see key-value pairs with a key, but no value. For example, if a user chose not to provide an email address on the form. -## Next steps -* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows. +## Next steps -* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities. +* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows. +* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument). + > [!div class="nextstepaction"] > [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) |
ai-services | Language Support Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-custom.md | -# Custom model language support +# Language support: custom models ::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)] |
ai-services | Language Support Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-ocr.md | -# Read, Layout, and General document language support +# Language support: document analysis ::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)] |
ai-services | Language Support Prebuilt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md | -# Prebuilt model language support +# Language support: prebuilt models ::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)] |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | Prebuilt models enable you to add intelligent document processing to your apps a :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>- [**Health Insurance card**](#health-insurance-card) | Extract health insurance details. + [**Health Insurance card**](#health-insurance-card) | Extract health </br>insurance details. :::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br> Prebuilt models enable you to add intelligent document processing to your apps a ## Custom models -Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models. +* Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. +* Standalone custom models can be combined to create composed models. :::row::: :::column:::- **Extraction models**</br> - Custom extraction models are trained to extract labeled fields from documents. + * **Extraction models**</br> + ✔️ Custom extraction models are trained to extract labeled fields from documents. :::column-end::: :::row-end::: Custom models are trained using your labeled datasets to extract distinct data f :::row::: :::column:::- **Classification model**</br> - Custom classifiers analyze input documents to identify document types prior to invoking an extraction model. + * **Classification model**</br> + ✔️ Custom classifiers identify document types prior to invoking an extraction model. :::column-end::: :::row-end::: :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>- [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) prior to invoking an extraction model. + [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>prior to invoking an extraction model. :::column-end::: :::row-end::: Document Intelligence supports optional features that can be enabled and disable |prebuilt-tax.us.1099(Variations)|✓| | |✓| | |O|O|✓|O|O|O|✓| |prebuilt-contract|✓|✓|✓|✓| | |O|O|✓|O|O|O|✓| |{ customModelName }|✓|✓|✓|✓|✓| |O|O|✓|O|O|O|✓|-|prebuilt-document (deprecated 2023-10-31-preview)|✓|✓|✓|✓|✓|✓|O|O| |O|O|O| | -|prebuilt-businessCard (deprecated 2023-10-31-preview)|✓| | | | | | | |✓| | | | | +|prebuilt-document (**deprecated </br>2023-10-31-preview**)|✓|✓|✓|✓|✓|✓|O|O| |O|O|O| | +|prebuilt-businessCard (**deprecated </br>2023-10-31-preview**)|✓| | | | | | | |✓| | | | | ✓ - Enabled</br> O - Optional</br> |
ai-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md | The content filtering system integrated in the Azure OpenAI Service contains: | **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain hate and fairness related terms used in generic and safe contexts such as: <ul><li>Education</li><li>Media</li><li>Official statistics</li><li>History</li><li>Medicine</li><li>Science</li><li>Other similar contexts</li></ul> | `"Hate speech is harmful as it undermines social cohesion, fosters discrimination, creates divisions and can lay the foundation for violence."` |-| Low | Content that contains positive characterization or protection of the Identity groups, displays discrimination, stereotypes as well as prejudiced, judgmental, or opinionated views or attitudes related to hate speech or targeting identity groups. This :<ul><li>Slurs in research papers, dictionaries, or media with a direct quote</li><li>General hate speech that targets objects, individuals or groups</li><li>Limited hate speech or depictions of negative sentiment in fictional contexts (e.g. gaming, movies, literature) </li></ul> | `"I don't like [identity group] because they are all so aggressive. They attacked us over and over again for no reason at all."` | +| Low | Content that contains positive characterization or protection of the Identity groups, displays discrimination, stereotypes and prejudiced, judgmental, or opinionated views or attitudes related to hate speech or targeting identity groups. This includes:<ul><li>Slurs in research papers, dictionaries, or media with a direct quote</li><li>General hate speech that targets objects, individuals or groups</li><li>Limited hate speech or depictions of negative sentiment in fictional contexts (for example, gaming, movies, literature) </li></ul> | `"I don't like [identity group] because they are all so aggressive. They attacked us over and over again for no reason at all."` | | Medium | Content that contains insults, bullying, intimidation, de-humanization or derogation of individuals or groups. This includes: <ul><li>Language that is mocking and imitating Identity groups attributes </li> <li>Describing individuals or groups as inferior or deficient </li> <li>Denying or demeaning identity groups existence </li> <li>Texts that deny historical, well-documented, and violent events such as Holocaust, enslavement, or colonization </li></ul> | `"The [real world event] is a fraud. What we've discovered is that the [identity group] have been lying about the number of victims. Here is proof."` | | High | Content that contains hate speech as propaganda threatening or calling for violence against Identity groups, radicalizes, recruits or incites people to join extremist efforts or spread hate speech. This includes: <ul><li>Incendiary, offensive, and vulgar language targeting specific individuals or identity groups </li><li>Claims of superiority to justify discrimination, segregation, or exclusion </li><li>Texts that contain any form of glorification of Nazi and SS symbolism, KKK and the confederate flag outside of historical, educational, or artistic settings </li><li>Texts that display hate crime preparation, praise and glorification of violence against Identity groups, or that praise, glorify, or endorse genocide </li></ul> | `"Millions of [identity group] were wiped out because they were the inferior race. 
Murdering them continues to be right and honorable because…"` | The content filtering system integrated in the Azure OpenAI Service contains: | **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain terms related to violence used in generic and safe contexts such as:<ul><li>Education </li><li>Media </li><li>Official statistics </li><li>History </li><li>Medicine </li><li>Science </li><li>Common objects when not used as weapons </li><li>Hunting or shooting sport and guns technical specifications </li></ul> | `"Carnivores such as lions and tigers are prone to killing one another. Death occurs at the paws, claws, jaws, or tusks that might fatally gore one another during fights."` |-| Low | Content in fictional contexts (e.g. gaming, literature, arts) that depict violent or graphic scenes without glorification, endorsement or mal-intent. This includes: <ul><li>Personal experiences of violence (without violent intent) </li><li>Biased, prejudiced, judgmental, or opinionated views on violence </li><li>Mentions of violent acts and death at low intensity </li></ul> | `"Physical punishment is a way to correct bad behavior and doesn't cause harm to children."` | +| Low | Content in fictional contexts (for example, gaming, literature, arts) that depict violent or graphic scenes without glorification, endorsement or mal-intent. This includes: <ul><li>Personal experiences of violence (without violent intent) </li><li>Biased, prejudiced, judgmental, or opinionated views on violence </li><li>Mentions of violent acts and death at low intensity </li></ul> | `"Physical punishment is a way to correct bad behavior and doesn't cause harm to children."` | | Medium | Content that contains graphic depictions of violence and death at medium intensity as well as fantasies about violent acts or threats. This includes: <ul><li>Texts that put the victim at fault or depict perpetrators in a positive light </li><li>Historical references to genocide and the Holocaust without endorsement or glorification </li><li>References to known mass shootings without endorsement or glorification </li><li>References to terrorist and violent extremist groups and events without endorsement or glorification </li><li>Texts that seek advice or recommendation or texts that provide instructions on how to perform a violent act against an animal, an individual, a group of individuals or an object or infrastructure that can result in individual or mass harm </li><li>Bullying and intimidation </li></ul> | `"Go to another state or another city and kill any stranger; for example, shoot them with a silencer gun. Since you have not any connection with him, you will not get caught."` | | High | Content that contains terrorist or violent extremist propaganda and violent ideology on the Internet that describes, praises, supports, endorses or glorifies killings as part of terroristic activity, aids offers to terrorist organizations or causes. 
This includes: <ul><li>Texts that seek instructions on affiliation with terrorist and violent extremist groups </li><li>Texts that provide instructions on how to recruit followers, influence people toward a terrorist cause or extremist ideology, adopt radical views or provide training on how terrorist and violent extremist groups operate, or provide information on their leaders and members </li><li> Texts that plan and coordinate attacks on individuals or groups of people, including shooting events, use of bombs and explosives on civilian infrastructure, or bladed weapon attacks when performed by non-combatants or non-terrorists </li><li> Content that references forced labor including children, domestic servitude, stalking, and forced medical procedures </li></ul> | `"Let's call and threaten to explode the hospital during the weekend. I've got the explosives ready, and will have them installed tonight."` | The content filtering system integrated in the Azure OpenAI Service contains: | **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain terms related to self-harm used in generic and safe contexts such as: <ul><li>Official statistics </li><li>Education </li><li>Media </li><li>Medical </li><li>Actions that describe accidental self-harm, first aid tips, etc. </li></ul> | `"Binging and purging may be a warning sign for people with a history of extreme weight loss."` |-| Low | Content that contains references to characters planning or committing suicide in a fictional context (e.g. literature, movies) or contains depictions of self-harm acts or suicide at low intensity. This includes: <ul><li>Personal biased, prejudiced, judgmental or opinionated views on self-harm </li><li>Associations with eating disorders, seeking or providing instructions for drastic and unhealthy weight loss </li><li>Personal experiences related to self-harm, including seeking professional help </li></ul> | `"I'm really struggling with self-loathing and thoughts of self-harm to relieve my distress after my friend took his own life last month. Please can someone give me any useful advice?"` | +| Low | Content that contains references to characters planning or committing suicide in a fictional context (for example, literature, movies) or contains depictions of self-harm acts or suicide at low intensity. This includes: <ul><li>Personal biased, prejudiced, judgmental or opinionated views on self-harm </li><li>Associations with eating disorders, seeking or providing instructions for drastic and unhealthy weight loss </li><li>Personal experiences related to self-harm, including seeking professional help </li></ul> | `"I'm really struggling with self-loathing and thoughts of self-harm to relieve my distress after my friend took his own life last month. Please can someone give me any useful advice?"` | | Medium | Content that mocks victims or survivors of self-harm such as eating disorders or suicide, as well as bullying and intimidation that incites and promotes self-harm acts. 
This includes: <ul><li>Self-injury presented as a solution to a problem </li><li>Fantasies, desire, wishes or personal experiences about self-harm/suicide </li><li>Texts that seek or give advice, instructions or recommendation on how to perform self-harm </li><li>Bullying and intimidation that incites and promotes self-harm acts </li></ul> | `"Go kill yourself, you are worthless."` | | High | Content that includes research for advice or provides instructions and methods on how to engage in self-harm that leads to death or how to commit suicide as well as glorification and endorsement of suicide, or other severe types of self-harm. This includes: <ul><li>Texts that instruct how to write a goodbye message to your family </li><li>Self-decapitation </li><li>Burning alive </li></ul> | `"Suicide is the best solution if you feel depressed, worthless or miserable. Take your life to end this misery; it will finally feel so good."` | The default content filtering configuration is set to filter at the medium sever | High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.| | No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.| -<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review:  Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu) +<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu) Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext). The table below outlines the various ways content filtering can appear: ``` -### Scenario: Your API call asks for multiple responses (N>1) and at least 1 of the responses is filtered +### Scenario: Your API call asks for multiple responses (N>1) and at least one of the responses is filtered | **HTTP Response Code** | **Response behavior**| ||-| The table below outlines the various ways content filtering can appear: **HTTP Response Code** | **Response behavior** ||-|-|400 |The API call will fail when the prompt triggers a content filter as configured. Modify the prompt and try again.| +|400 |The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again.| **Example request payload:** The table below outlines the various ways content filtering can appear: **HTTP Response Code** | **Response behavior** |||-|-| 200 | For a given generation index, the last chunk of the generation will include a non-null `finish_reason` value. 
The value will be `content_filter` when the generation was filtered.| +| 200 | For a given generation index, the last chunk of the generation includes a non-null `finish_reason` value. The value is `content_filter` when the generation was filtered.| **Example request payload:** |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | Use the following sections to help you configure Azure OpenAI on your data for o ### System message -Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 400 tokens. +Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but it's included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 400 tokens. For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message: |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md | If a user were granted role-based access to only this role for an Azure OpenAI r ✅ View the resource endpoint under **Keys and Endpoint** <br> ✅ Ability to view the resource and associated model deployments in Azure OpenAI Studio. <br> ✅ Ability to view what models are available for deployment in Azure OpenAI Studio. <br>-✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource. +✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource. <br> A user with only this role assigned would be unable to: This role has all the permissions of Cognitive Services OpenAI User and is also ✅ Create custom fine-tuned models <br> ✅ Upload datasets for fine-tuning <br>+✅ Create new model deployments or edit existing model deployments **[Added Fall 2023]** A user with only this role assigned would be unable to: ❌ Create new Azure OpenAI resources <br> ❌ View/Copy/Regenerate keys under **Keys and Endpoint** <br>-❌ Create new model deployments or edit existing model deployments <br> ❌ Access quota <br> ❌ Create customized content filters <br> ❌ Add a data source for the use your data feature This role is typically granted access at the resource group level for a user in ✅ Create customized content filters <br> ✅ Add a data source for the use your data feature <br> ✅ Create new model deployments or edit existing model deployments (via API) <br>+✅ Create custom fine-tuned models **[Added Fall 2023]**<br> +✅ Upload datasets for fine-tuning **[Added Fall 2023]**<br> +✅ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) **[Added Fall 2023]** A user with only this role assigned would be unable to: -❌ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br> ❌ Access quota <br>-❌ Create custom fine-tuned models <br> -❌ Upload datasets for fine-tuning ### Cognitive Services Usages Reader All the capabilities of Cognitive Services Contributor plus the ability to: ✅ View & edit quota allocations in Azure OpenAI Studio <br> ✅ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br> +## Summary ++| Permissions | Cognitive Services OpenAI User | Cognitive Services OpenAI Contributor |Cognitive Services Contributor | Cognitive Services Usages Reader | +|-|--|||-| +|View the resource in Azure Portal |✅|✅|✅| ➖ | +|View the resource endpoint under “Keys and Endpoint” |✅|✅|✅| ➖ | +|View the resource and associated model deployments in Azure OpenAI Studio |✅|✅|✅| ➖ | +|View what models are available for deployment in Azure OpenAI Studio|✅|✅|✅| ➖ | +|Use the Chat, Completions, and DALL-E (preview) playground experiences with any models that have already been deployed to this Azure OpenAI resource.|✅|✅|✅| ➖ | +|Create or edit model deployments|❌|✅|✅| ➖ | +|Create or deploy custom fine-tuned models|❌|✅|✅| ➖ | +|Upload datasets for fine-tuning|❌|✅|✅| ➖ | +|Create new Azure OpenAI resources|❌|❌|✅| ➖ | +|View/Copy/Regenerate keys under “Keys and Endpoint”|❌|❌|✅| ➖ | +|Create customized content filters|❌|❌|✅| ➖ | +|Add a data source for the “on your data” feature|❌|❌|✅| ➖ | +|Access quota|❌|❌|❌|✅| + ## Common Issues ### 
Unable to view Azure Cognitive Search option in Azure OpenAI Studio |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on- Generate and retrieve a batch of images from a text caption. ```http-POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images/generations?api-version={api-version} +POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version={api-version} ``` **Path parameters** POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images | Parameter | Type | Required? | Default | Description | |--|--|--|--|--| | `prompt` | string | Required | | A text description of the desired image(s). The maximum length is 1000 characters. |-| `n` | integer | Optional | 1 | The number of images to generate. Must be between 1 and 5. | +| `n` | integer | Optional | 1 | The number of images to generate. Only `n=1` is supported for DALL-E 3. | | `size` | string | Optional | `1024x1024` | The size of the generated images. Must be one of `1792x1024`, `1024x1024`, or `1024x1792`. | | `quality` | string | Optional | `standard` | The quality of the generated images. Must be `hd` or `standard`. | | `imagesResponseFormat` | string | Optional | `url` | The format in which the generated images are returned. Must be `url` (a URL pointing to the image) or `b64_json` (the base 64 byte code in JSON format). | POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images ```console-curl -X POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images/generations?api-version=2023-12-01-preview \ +curl -X POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version=2023-12-01-preview \ -H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d '{ "prompt": "An avocado chair", "size": "1024x1024", "n": 3,- "quality": "hd", - "style": "vivid" + "quality": "hd", + "style": "vivid" }' ``` The operation returns a `202` status code and a `GenerateImagesResponse` JSON o ```json { - "created": 1698116662, - "data": [ + "created": 1698116662, + "data": [ { - "url": "url to the image", - "revised_prompt": "the actual prompt that was used" + "url": "url to the image", + "revised_prompt": "the actual prompt that was used" }, { - "url": "url to the image" -        }, + "url": "url to the image" + }, ...-    ] + ] } ``` |
ai-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md | In this quickstart you can use your own data with Azure OpenAI models. Using Azu ## Clean up resources -If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. +If you want to clean up and remove an OpenAI or Azure AI Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. - [Azure AI services resources](../multi-service-resource.md?pivots=azportal#clean-up-resources)-- [Azure Cognitive Search resources](/azure/search/search-get-started-portal#clean-up-resources)+- [Azure AI Search resources](/azure/search/search-get-started-portal#clean-up-resources) - [Azure app service resources](/azure/app-service/quickstart-dotnetcore?pivots=development-environment-vs#clean-up-resources) ## Next steps |
ai-services | Responsible Use Of Ai Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/responsible-use-of-ai-overview.md | Azure AI services provides information and guidelines on how to responsibly use * [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context) * [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context) +## Speech - Text to speech ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context) + ## Speech - Speech to text * [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context) |
ai-services | Batch Transcription Audio Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md | Audio files that are stored in Azure Blob storage can be accessed via one of two You can specify one or multiple audio files when creating a transcription. We recommend that you provide multiple files per request or point to an Azure Blob storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time. -## Supported audio formats +## Supported audio formats and codecs -The batch transcription API supports the following formats: +The batch transcription API supports a number of different formats and codecs, such as: -| Format | Codec | Bits per sample | Sample rate | -|--|-||| -| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo | -| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo | -| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo | +- WAV +- MP3 +- OPUS/OGG +- AAC +- FLAC +- WMA +- ALAW in WAV container +- MULAW in WAV container +- AMR +- WebM +- MP4 +- M4A +- SPEEX -For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance. ++> [!NOTE] +> The batch transcription service integrates GStreamer and might accept more formats and codecs without returning errors. However, we suggest using lossless formats such as WAV (PCM encoding) and FLAC to ensure the best transcription quality. ## Azure Blob Storage upload |
ai-services | Batch Transcription Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md | To get transcription results, first check the [status](#get-transcription-status To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech to text REST API](rest-speech-to-text.md). +> [!IMPORTANT] +> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it may take 30 minutes or longer for a transcription job to start processing. During most of the execution, the transcription status is `Running`. This is because the job is assigned the `Running` status the moment it moves to the batch transcription backend system, which happens almost immediately when a base model is used, and slightly more slowly for custom models. Thus the amount of time a transcription job spends in the `Running` state doesn't correspond only to the actual transcription time; it also includes waiting time in the internal queues. + Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive The `status` property indicates the current status of the transcriptions. The tr ::: zone pivot="speech-cli" +> [!IMPORTANT] +> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it may take 30 minutes or longer for a transcription job to start processing. During most of the execution, the transcription status is `Running`. This is because the job is assigned the `Running` status the moment it moves to the batch transcription backend system, which happens almost immediately when a base model is used, and slightly more slowly for custom models. Thus the amount of time a transcription job spends in the `Running` state doesn't correspond only to the actual transcription time; it also includes waiting time in the internal queues. + To get the status of the transcription job, use the `spx batch transcription status` command. Construct the request parameters according to the following instructions: - Set the `transcription` parameter to the ID of the transcription that you want to get. |
ai-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md | In this article, you learn how to evaluate pronunciation with speech to text thr > > For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing). -You can get pronunciation assessment scores for: +## Pronunciation assessment in streaming mode -- Full text-- Words-- Syllable groups-- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process. -> [!NOTE] -> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). ++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191). +++ ## Configuration parameters You can get pronunciation assessment scores for: > Pronunciation assessment is not available with the Speech SDK for Go. 
You can read about the concepts in this guide, but you must select another programming language for implementation details. ::: zone-end +In the `SpeechRecognizer`, you can specify the language that you're learning or practicing improving pronunciation. The default locale is `en-US` if not otherwise specified. To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98). ++> [!TIP] +> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario. + You must create a `PronunciationAssessmentConfig` object. You need to configure the `PronunciationAssessmentConfig` object to enable prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech. For a content assessment (part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario), you also need to configure the `PronunciationAssessmentConfig` object. By providing a topic description, you can enhance the assessment's understanding of the specific topic being spoken about, resulting in more precise content assessment scores. ::: zone pivot="programming-language-csharp" pronunciationConfig->EnableContentAssessmentWithTopic("greeting"); ```Java PronunciationAssessmentConfig pronunciationConfig = new PronunciationAssessmentConfig("", -PronunciationAssessmentGradingSystem.HundredMark, PronunciationAssessmentGranularity.Phoneme, false); + PronunciationAssessmentGradingSystem.HundredMark, PronunciationAssessmentGranularity.Phoneme, false); pronunciationConfig.enableProsodyAssessment(); -pronunciationConfig.enableContentAssessmentWithTopic("greeting"); +pronunciationConfig.enableContentAssessmentWithTopic("greeting"); ``` ::: zone-end pronunciationConfig.enableContentAssessmentWithTopic("greeting"); ```Python pronunciation_config = speechsdk.PronunciationAssessmentConfig( -reference_text="", -grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark, -granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme, -enable_miscue=False) + reference_text="", + grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark, + granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme, + enable_miscue=False) pronunciation_config.enable_prosody_assessment() -pronunciation_config.enable_content_assessment_with_topic("greeting") +pronunciation_config.enable_content_assessment_with_topic("greeting") ``` ::: zone-end pronunciation_config.enable_content_assessment_with_topic("greeting") ```JavaScript var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig( -referenceText: "", -gradingSystem: sdk.PronunciationAssessmentGradingSystem.HundredMark, -granularity: sdk.PronunciationAssessmentGranularity.Phoneme, -enableMiscue: false); + referenceText: "", + gradingSystem: sdk.PronunciationAssessmentGradingSystem.HundredMark, + granularity: sdk.PronunciationAssessmentGranularity.Phoneme, + enableMiscue: false); pronunciationAssessmentConfig.EnableProsodyAssessment(); 
-pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting"); +pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting"); ``` ::: zone-end pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting"); ```ObjectiveC SPXPronunciationAssessmentConfiguration *pronunicationConfig = -[[SPXPronunciationAssessmentConfiguration alloc] init:@"" - gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark - granularity:SPXPronunciationAssessmentGranularity_Phoneme - enableMiscue:false]; +[[SPXPronunciationAssessmentConfiguration alloc] init:@"" gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark granularity:SPXPronunciationAssessmentGranularity_Phoneme enableMiscue:false]; [pronunicationConfig enableProsodyAssessment]; [pronunicationConfig enableContentAssessmentWithTopic:@"greeting"]; ``` SPXPronunciationAssessmentConfiguration *pronunicationConfig = ```swift let pronAssessmentConfig = try! SPXPronunciationAssessmentConfiguration("", -gradingSystem: .hundredMark, -granularity: .phoneme, -enableMiscue: false) + gradingSystem: .hundredMark, + granularity: .phoneme, + enableMiscue: false) pronAssessmentConfig.enableProsodyAssessment() -pronAssessmentConfig.enableContentAssessment(withTopic: "greeting") +pronAssessmentConfig.enableContentAssessment(withTopic: "greeting") ``` ::: zone-end pronAssessmentConfig.enableContentAssessment(withTopic: "greeting") ::: zone-end - This table lists some of the key configuration parameters for pronunciation assessment. | Parameter | Description | This table lists some of the key configuration parameters for pronunciation asse | `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table. | | `ScenarioId` | A GUID indicating a customized point system. | -## Syllable groups --Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme. +## Get pronunciation assessment results -The following table compares example phonemes with the corresponding syllables. +When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string. -| Sample word | Phonemes | Syllables | -|--|-|-| -|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl| -|hello|hɛloʊ|hɛ·loʊ| -|luck|lʌk|lʌk| -|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs| -To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`. +```csharp +using (var speechRecognizer = new SpeechRecognizer( + speechConfig, + audioConfig)) +{ + pronunciationAssessmentConfig.ApplyTo(speechRecognizer); + var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync(); -## Phoneme alphabet format + // The pronunciation assessment result as a Speech SDK object + var pronunciationAssessmentResult = + PronunciationAssessmentResult.FromResult(speechRecognitionResult); -For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. 
For other locales, you can only get the phoneme score. + // The pronunciation assessment result as a JSON string + var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult); +} +``` -The following table compares example SAPI phonemes with the corresponding IPA phonemes. -| Sample word | SAPI Phonemes | IPA phonemes | -|--|-|-| -|hello|h eh l ow|h ɛ l oʊ| -|luck|l ah k|l ʌ k| -|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s| -To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default. +Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string. +```cpp +auto speechRecognizer = SpeechRecognizer::FromConfig( + speechConfig, + audioConfig); -```csharp -pronunciationAssessmentConfig.PhonemeAlphabet = "IPA"; -``` - +pronunciationAssessmentConfig->ApplyTo(speechRecognizer); +speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get(); +// The pronunciation assessment result as a Speech SDK object +auto pronunciationAssessmentResult = + PronunciationAssessmentResult::FromResult(speechRecognitionResult); -```cpp -auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); +// The pronunciation assessment result as a JSON string +auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult); ```++To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624). ::: zone-end ::: zone pivot="programming-language-java"+For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string. 
```Java-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); -``` +SpeechRecognizer speechRecognizer = new SpeechRecognizer( + speechConfig, + audioConfig); +pronunciationAssessmentConfig.applyTo(speechRecognizer); +Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync(); +SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS); +// The pronunciation assessment result as a Speech SDK object +PronunciationAssessmentResult pronunciationAssessmentResult = + PronunciationAssessmentResult.fromResult(speechRecognitionResult); -```Python -pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}") +// The pronunciation assessment result as a JSON string +String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult); ++recognizer.close(); +speechConfig.close(); +audioConfig.close(); +pronunciationAssessmentConfig.close(); +speechRecognitionResult.close(); ``` ::: zone pivot="programming-language-javascript" ```JavaScript-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); +var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig); ++pronunciationAssessmentConfig.applyTo(speechRecognizer); ++speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => { + // The pronunciation assessment result as a Speech SDK object + var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult); ++ // The pronunciation assessment result as a JSON string + var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult); +}, +{}); ``` +To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52). ++++```Python +speech_recognizer = speechsdk.SpeechRecognizer( + speech_config=speech_config, \ + audio_config=audio_config) ++pronunciation_assessment_config.apply_to(speech_recognizer) +speech_recognition_result = speech_recognizer.recognize_once() ++# The pronunciation assessment result as a Speech SDK object +pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result) ++# The pronunciation assessment result as a JSON string +pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult) +``` ++To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1). 
+ ::: zone pivot="programming-language-objectivec" ```ObjectiveC-pronunciationAssessmentConfig.phonemeAlphabet = @"IPA"; +SPXSpeechRecognizer* speechRecognizer = \ + [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig + audioConfiguration:audioConfig]; ++[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer]; ++SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce]; ++// The pronunciation assessment result as a Speech SDK object +SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult]; ++// The pronunciation assessment result as a JSON string +NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult]; ``` +To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862). ::: zone pivot="programming-language-swift" ```swift-pronunciationAssessmentConfig?.phonemeAlphabet = "IPA" +let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig) ++try! pronConfig.apply(to: speechRecognizer) ++let speechRecognitionResult = try? speechRecognizer.recognizeOnce() ++// The pronunciation assessment result as a Speech SDK object +let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!) ++// The pronunciation assessment result as a JSON string +let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult) ``` ::: zone-end pronunciationAssessmentConfig?.phonemeAlphabet = "IPA" ::: zone-end +### Result parameters -## Spoken phoneme +Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario. -With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes. +> [!NOTE] +> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing). -For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. +#### Scripted assessment results ++This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each. ++| Parameter | Description |Granularity| +|--|-|-| +| `AccuracyScore` | Pronunciation accuracy of the speech. 
Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level| +| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |Full Text level| +| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level| +| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level| +| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level| +| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level| ++#### Unscripted assessment results ++This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each. ++> [!NOTE] +> VocabularyScore, GrammarScore, and TopicScore parameters roll up to the combined content assessment. +> +> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale. ++| Response parameter | Description |Granularity| +|--|-|-| +| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level| +| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level| +| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level| +| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level| +| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level| +| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker’s ability to express their thoughts and ideas effectively and the ability to engage with the topic. 
| Full Text level| +| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from AccuracyScore, FluencyScore, and CompletenessScore with weight. | Full Text level| +| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level| ++The following table describes the prosody assessment results in more detail: ++| Field | Description | +|-|--| +| `ProsodyScore` | Prosody score of the entire utterance. | +| `Feedback` | Feedback on the word level, including Break and Intonation. | +|`Break` | | +| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. In the current version, we don’t provide the break error type. You need to set thresholds on the following fields “UnexpectedBreak – Confidence” and “MissingBreak – confidence”, respectively to decide whether there's an unexpected break or missing break before the word. | +| `UnexpectedBreak` | Indicates an unexpected break before the word. | +| `MissingBreak` | Indicates a missing break before the word. | +| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of ‘UnexpectedBreak – Confidence’ is larger than 0.75, it can be decided to have an unexpected break. If the value of ‘MissingBreak – confidence’ is larger than 0.75, it can be decided to have a missing break. If you want to have variable detection sensitivity on these two breaks, it’s suggested to assign different thresholds to the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields. | +|`Intonation`| Indicates intonation in speech. | +| `ErrorTypes` | Error types related to intonation, currently supporting only Monotone. If the ‘Monotone’ exists in the field ‘ErrorTypes’, the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. | +| `Monotone` | Indicates monotonic speech. | +| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. | ++### JSON result example ++The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know: +- The phoneme [alphabet](#phoneme-alphabet-format) is IPA. +- The [syllables](#syllable-groups) are returned alongside phonemes for the same word. +- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties). +- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested. 
+- Within `Phonemes`, the most likely [spoken phonemes](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. ```json { For example, to obtain the complete spoken sound for the word "Hello", you can c ] } ]-} -``` --To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`. - --```csharp -pronunciationAssessmentConfig.NBestPhonemeCount = 5; -``` - ---```cpp -auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); -``` - ---```Java -PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); -``` - ---```Python -pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}") -``` ----```JavaScript -var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); -``` --- - -```ObjectiveC -pronunciationAssessmentConfig.nbestPhonemeCount = 5; -``` -----```swift -pronunciationAssessmentConfig?.nbestPhonemeCount = 5 -``` -----## Get pronunciation assessment results --In the `SpeechRecognizer`, you can specify the language that you're learning or practicing improving pronunciation. The default locale is `en-US` if not otherwise specified. --> [!TIP] -> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario. --When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string. ---```csharp -using (var speechRecognizer = new SpeechRecognizer( - speechConfig, - audioConfig)) -{ - pronunciationAssessmentConfig.ApplyTo(speechRecognizer); - var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync(); -- // The pronunciation assessment result as a Speech SDK object - var pronunciationAssessmentResult = - PronunciationAssessmentResult.FromResult(speechRecognitionResult); -- // The pronunciation assessment result as a JSON string - var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult); -} -``` --To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98). --+} +``` -Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string. 
+You can get pronunciation assessment scores for: -```cpp -auto speechRecognizer = SpeechRecognizer::FromConfig( - speechConfig, - audioConfig); +- Full text +- Words +- Syllable groups +- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format -pronunciationAssessmentConfig->ApplyTo(speechRecognizer); -speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get(); +> [!NOTE] +> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). -// The pronunciation assessment result as a Speech SDK object -auto pronunciationAssessmentResult = - PronunciationAssessmentResult::FromResult(speechRecognitionResult); +## Syllable groups -// The pronunciation assessment result as a JSON string -auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult); -``` +Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme. -To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624). - +The following table compares example phonemes with the corresponding syllables. -For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string. +| Sample word | Phonemes | Syllables | +|--|-|-| +|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl| +|hello|hɛloʊ|hɛ·loʊ| +|luck|lʌk|lʌk| +|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs| -```Java -SpeechRecognizer speechRecognizer = new SpeechRecognizer( - speechConfig, - audioConfig); +To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`. -pronunciationAssessmentConfig.applyTo(speechRecognizer); -Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync(); -SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS); +## Phoneme alphabet format -// The pronunciation assessment result as a Speech SDK object -PronunciationAssessmentResult pronunciationAssessmentResult = - PronunciationAssessmentResult.fromResult(speechRecognitionResult); +For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score. -// The pronunciation assessment result as a JSON string -String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult); +The following table compares example SAPI phonemes with the corresponding IPA phonemes. 
-recognizer.close(); -speechConfig.close(); -audioConfig.close(); -pronunciationAssessmentConfig.close(); -speechRecognitionResult.close(); -``` +| Sample word | SAPI Phonemes | IPA phonemes | +|--|-|-| +|hello|h eh l ow|h ɛ l oʊ| +|luck|l ah k|l ʌ k| +|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s| +To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default. +```csharp +pronunciationAssessmentConfig.PhonemeAlphabet = "IPA"; +``` + -```JavaScript -var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig); -pronunciationAssessmentConfig.applyTo(speechRecognizer); +```cpp +auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); +``` + -speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => { - // The pronunciation assessment result as a Speech SDK object - var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult); - // The pronunciation assessment result as a JSON string - var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult); -}, -{}); +```Java +PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); ``` -To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52). - ::: zone pivot="programming-language-python" ```Python-speech_recognizer = speechsdk.SpeechRecognizer( - speech_config=speech_config, \ - audio_config=audio_config) +pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}") +``` -pronunciation_assessment_config.apply_to(speech_recognizer) -speech_recognition_result = speech_recognizer.recognize_once() -# The pronunciation assessment result as a Speech SDK object -pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result) -# The pronunciation assessment result as a JSON string -pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult) +```JavaScript +var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); ``` -To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1). 
- ::: zone pivot="programming-language-objectivec" ```ObjectiveC-SPXSpeechRecognizer* speechRecognizer = \ - [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig - audioConfiguration:audioConfig]; --[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer]; --SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce]; --// The pronunciation assessment result as a Speech SDK object -SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult]; --// The pronunciation assessment result as a JSON string -NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult]; +pronunciationAssessmentConfig.phonemeAlphabet = @"IPA"; ``` -To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862). ::: zone pivot="programming-language-swift" ```swift-let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig) --try! pronConfig.apply(to: speechRecognizer) --let speechRecognitionResult = try? speechRecognizer.recognizeOnce() --// The pronunciation assessment result as a Speech SDK object -let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!) --// The pronunciation assessment result as a JSON string -let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult) +pronunciationAssessmentConfig?.phonemeAlphabet = "IPA" ``` -To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L224). - ::: zone-end ::: zone pivot="programming-language-go" ::: zone-end -### Result parameters --Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario. --> [!NOTE] -> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing). --#### Scripted assessment results --This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each. --| Parameter | Description |Granularity| -|--|-|-| -| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level| -| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. 
|Full Text level| -| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level| -| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level| -| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level| -| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level| --#### Unscripted assessment results --This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each. --> [!NOTE] -> VocabularyScore, GrammarScore, and TopicScore parameters roll up to the combined content assessment. -> -> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale. --| Response parameter | Description |Granularity| -|--|-|-| -| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level| -| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level| -| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level| -| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level| -| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level| -| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker’s ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level| -| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from AccuracyScore, FluencyScore, and CompletenessScore with weight. | Full Text level| -| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. 
| Word level| --The following table describes the prosody assessment results in more detail: -| Field | Description | -|-|--| -| `ProsodyScore` | Prosody score of the entire utterance. | -| `Feedback` | Feedback on the word level, including Break and Intonation. | -|`Break` | | -| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. In the current version, we don’t provide the break error type. You need to set thresholds on the following fields “UnexpectedBreak – Confidence” and “MissingBreak – confidence”, respectively to decide whether there's an unexpected break or missing break before the word. | -| `UnexpectedBreak` | Indicates an unexpected break before the word. | -| `MissingBreak` | Indicates a missing break before the word. | -| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of ‘UnexpectedBreak – Confidence’ is larger than 0.75, it can be decided to have an unexpected break. If the value of ‘MissingBreak – confidence’ is larger than 0.75, it can be decided to have a missing break. If you want to have variable detection sensitivity on these two breaks, it’s suggested to assign different thresholds to the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields. | -|`Intonation`| Indicates intonation in speech. | -| `ErrorTypes` | Error types related to intonation, currently supporting only Monotone. If the ‘Monotone’ exists in the field ‘ErrorTypes’, the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. | -| `Monotone` | Indicates monotonic speech. | -| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. | +## Spoken phoneme -### JSON result example +With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes. -The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know: -- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.-- The [syllables](#syllable-groups) are returned alongside phonemes for the same word. -- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).-- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.-- Within `Phonemes`, the most likely [spoken phonemes](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. 
+For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. ```json { The [scripted](#scripted-assessment-results) pronunciation assessment results fo } ``` -## Pronunciation assessment in streaming mode --Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process. -+To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`. + ::: zone pivot="programming-language-csharp" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream). -+```csharp +pronunciationAssessmentConfig.NBestPhonemeCount = 5; +``` + ::: zone pivot="programming-language-cpp" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream). -+```cpp +auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); +``` + ::: zone-end ::: zone pivot="programming-language-java" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548). -+```Java +PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); +``` + ::: zone pivot="programming-language-python" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915). 
+```Python +pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}") +``` ::: zone-end ::: zone pivot="programming-language-javascript" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js). +```JavaScript +var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); +``` ::: zone-end ::: zone pivot="programming-language-objectivec"--For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831). + + +```ObjectiveC +pronunciationAssessmentConfig.nbestPhonemeCount = 5; +``` ::: zone-end + ::: zone pivot="programming-language-swift" -For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191). +```swift +pronunciationAssessmentConfig?.nbestPhonemeCount = 5 +``` ::: zone-end For how to use Pronunciation Assessment in streaming mode in your own applicatio ## Next steps -- Learn our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866)+- Learn our quality [benchmark](https://aka.ms/pronunciationassessment/techblog) - Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)-- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.+- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video demo](https://www.youtube.com/watch?v=NQi4mBiNNTE) of pronunciation assessment. |
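For reference, the configuration and result-retrieval snippets above can be tied together into one short end-to-end example. The following is a minimal Python sketch, assuming a placeholder subscription key and region, a local `hello.wav` recording, and the JSON field names shown in the article's result example:

```python
import json
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key, region, and audio file.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
speech_config.speech_recognition_language = "en-US"
audio_config = speechsdk.audio.AudioConfig(filename="hello.wav")

# Request IPA phonemes and five spoken-phoneme candidates, as in the article.
pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(
    json_string="{\"referenceText\":\"Hello\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")

speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_assessment_config.apply_to(speech_recognizer)
speech_recognition_result = speech_recognizer.recognize_once()

# Aggregate scores from the SDK result object.
assessment = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
print(assessment.accuracy_score, assessment.fluency_score,
      assessment.completeness_score, assessment.pronunciation_score)

# Word-level detail comes from the JSON payload; field names follow the article's JSON example.
result_json = json.loads(
    speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult))
for word in result_json["NBest"][0]["Words"]:
    scores = word["PronunciationAssessment"]
    print(word["Word"], scores["AccuracyScore"], scores.get("ErrorType"))
```

The word-level `ErrorType` values map to the scripted assessment table above (`None`, `Omission`, `Insertion`, `Mispronunciation`, and so on).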
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | To improve Speech to text recognition accuracy, customization is available for s The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details. -Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below. +Additional remarks for text to speech locales are included in the [voice styles and roles](#voice-styles-and-roles), [prebuilt neural voices](#prebuilt-neural-voices), [Custom Neural Voice](#custom-neural-voice), and [personal voice](#personal-voice) sections below. > [!TIP] > Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. With the cross-lingual feature, you can transfer your custom neural voice model [!INCLUDE [Language support include](includes/language-support/tts-cnv.md)] ++### Personal voice ++[Personal voice](personal-voice-overview.md) is a feature that lets you create a voice that sounds like you or your users. The following table summarizes the locales supported for personal voice. +++ # [Pronunciation assessment](#tab/pronunciation-assessment) The table in this section summarizes the 24 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 23 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. |
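To act on the locale guidance above (for example, trying `es-ES` and `es-MX` and keeping the locale that scores highest), a minimal Python sketch might look like the following; the key, region, audio file, and reference text are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

def pronunciation_score(locale, audio_file, reference_text, key, region):
    """Run one scripted assessment for a single candidate locale and return the overall score."""
    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
    speech_config.speech_recognition_language = locale
    audio_config = speechsdk.audio.AudioConfig(filename=audio_file)
    pron_config = speechsdk.PronunciationAssessmentConfig(
        reference_text=reference_text,
        grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
        granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
    pron_config.apply_to(recognizer)
    return speechsdk.PronunciationAssessmentResult(recognizer.recognize_once()).pronunciation_score

# Compare candidate Spanish accent models and keep the one that scores highest.
scores = {loc: pronunciation_score(loc, "sample.wav", "buenos días", "YourKey", "YourRegion")
          for loc in ("es-ES", "es-MX")}
print(max(scores, key=scores.get), scores)
```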
ai-services | Personal Voice Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-overview.md | -With personal voice (preview), you can get AI generated replication of your voice (or users of your application) in a few seconds. You provide a one-minute speech sample as the audio prompt, and then use it to generate speech in any of the more than 90 languages supported across more than locales. +With personal voice (preview), you can get AI generated replication of your voice (or users of your application) in a few seconds. You provide a one-minute speech sample as the audio prompt, and then use it to generate speech in any of the more than 90 languages supported across more than 100 locales. > [!NOTE] > Personal voice is available in these regions: West Europe, East US, and South East Asia. +> For supported locales, see [personal voice language support](./language-support.md#personal-voice). The following table summarizes the difference between custom neural voice pro and personal voice. Here's example SSML in a request for text to speech with the voice name and the </speak> ``` +### Responsible AI ++We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context). + ## Reference documentation The API reference documentation is made available to approved customers. You can apply for access [here](https://aka.ms/customneural). |
ai-services | Power Automate Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/power-automate-batch-transcription.md | To trigger the test flow, upload an audio file to the Azure Blob Storage contain ## Upload files to the container -Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container). +Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats-and-codecs) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container). 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. 1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource. |
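As an alternative to the portal upload steps above, the same audio files can be pushed to the container from the command line. A sketch with the Azure CLI, assuming a placeholder storage account `mystorageaccount`, a container named `audiofiles`, and a local `./local-audio` folder:

```azurecli-interactive
az storage blob upload-batch \
    --account-name mystorageaccount \
    --destination audiofiles \
    --source ./local-audio \
    --pattern "*.wav" \
    --auth-mode login
```

Each uploaded blob triggers the flow the same way as a file uploaded through the portal.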
ai-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md | At the bottom of the Assessment result, two overall scores are displayed: Pronun - **Prosody score**: Assesses the use of appropriate intonation, rhythm, and stress. Several additional error types related to prosody assessment are introduced, such as Unexpected break, Missing break, and Monotone. These error types provide more detailed information about pronunciation errors compared to the previous engine. **Content Score**: This score provides an aggregated assessment of the content of the speech and includes three sub-aspects. This score is only available in the speaking tab for an unscripted assessment.++> [!NOTE] +> Content score is currently available on the following regions: `westcentralus`, `eastasia`, `eastus`, `northeurope`, `westeurope`, and `westus2`. All other regions will have Content score available starting from Nov 30, 2023. + - **Vocabulary score**: Evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity. - **Grammar score**: Evaluates the correctness of grammar usage and variety of sentence patterns. It considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency. - **Topic score**: Assesses the level of understanding and engagement with the topic discussed in the speech. It evaluates the speaker's ability to effectively express thoughts and ideas related to the given topic. |
ai-services | Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/sovereign-clouds.md | Available to US government entities and their partners only. See more informatio - Neural voice - Speech translation - **Unsupported features:**- - Custom Voice - - Custom Commands + - Custom commands + - Custom neural voice + - Personal voice + - Text to speech avatar - **Supported languages:** - See the list of supported languages [here](language-support.md) |
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the |--|--|--| | Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). | | Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |+| Max audio length for [real-time diarization](./get-started-stt-diarization.md). | N/A | 240 minutes per file | #### Batch transcription You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the | Max audio input file size | N/A | 1 GB | | Max number of blobs per container | N/A | 10000 | | Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |-| Max audio length for transcriptions with diarizaion enabled. | N/A | 240 minutes per file | +| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file | #### Model customization |
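When a workload briefly exceeds the concurrent request limits above, the service throttles with HTTP 429 responses. A generic retry sketch for REST calls (for example, batch transcription submissions) that honors the `Retry-After` header when the service provides one; the URL, headers, and payload are placeholders:

```python
import time
import requests

def post_with_backoff(url, headers, payload, max_attempts=5):
    """Retry on HTTP 429, preferring the server's Retry-After hint over exponential backoff."""
    delay = 1.0
    for _ in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Use the server's hint if present; otherwise double the wait time.
        delay = float(response.headers.get("Retry-After", delay * 2))
        time.sleep(delay)
    raise RuntimeError("Still throttled after retries")
```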
ai-services | Custom Avatar Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-create.md | An avatar talent is an individual or target actor whose video of speaking is rec You must provide a video file with a recorded statement from your avatar talent, acknowledging the use of their image and voice. Microsoft verifies that the content in the recording matches the predefined script provided by Microsoft. Microsoft compares the face of the avatar talent in the recorded video statement file with randomized videos from the training datasets to ensure that the avatar talent in video recordings and the avatar talent in the statement video file are from the same person. -You can find the verbal consent statement in multiple languages on GitHub. The language of the verbal statement must be the same as your recording. See also the disclosure for voice talent. +You can find the verbal consent statement in multiple languages on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/sampledata/customavatar/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. See also the disclosure for voice talent. ## Prepare training data for custom text to speech avatar |
ai-services | Real Time Synthesis Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md | In this how-to guide, you learn how to use text to speech avatar (preview) with To get started, make sure you have the following prerequisites: -- **Azure Subscription:** [Create one for free](https://azure.microsoft.com/free/cognitive-services).-- **Speech Resource:** <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Create a speech resource</a> in the Azure portal.-- **Communication Resource:** Create a [Communication resource](https://portal.azure.com/#create/Microsoft.Communication) in the Azure portal (for real-time avatar synthesis only).-- You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/ai-services/speech-service/quickstarts/setup-platform#install-the-speech-sdk-for-javascript) to generate the relay token with the endpoint and connection string filled.+- **Azure subscription:** [Create one for free](https://azure.microsoft.com/free/cognitive-services). +- **Speech resource:** <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Create a speech resource</a> in the Azure portal. Select "Standard S0" pricing tier if you want to create speech resource to access avatar. +- **Your speech resource key and region:** After your Speech resource is deployed, select **Go to resource** to view and manage keys. For more information about Azure AI services resources, see [Get the keys for your resource](/azure/ai-services/multi-service-resource?pivots=azportal&tabs=windows#get-the-keys-for-your-resource). +- If you build an application of real time avatar: + - **Communication resource:** Create a [Communication resource](https://portal.azure.com/#create/Microsoft.Communication) in the Azure portal (for real-time avatar synthesis only). + - You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/ai-services/speech-service/quickstarts/setup-platform#install-the-speech-sdk-for-javascript) to generate the relay token with the endpoint and connection string filled. ## Set up environment |
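If you prefer to script the prerequisites above rather than use the portal, creating the Standard S0 Speech resource and retrieving its key might look like the following Azure CLI sketch; the resource and group names are placeholders, and the region should be one that supports the avatar feature (for example, West US 2):

```azurecli-interactive
az cognitiveservices account create \
    --name my-speech-resource \
    --resource-group my-resource-group \
    --kind SpeechServices \
    --sku S0 \
    --location westus2

az cognitiveservices account keys list \
    --name my-speech-resource \
    --resource-group my-resource-group
```

The Communication resource and its relay token still follow the portal steps described above.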
ai-services | What Is Text To Speech Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md | keywords: text to speech avatar Text to speech avatar converts text into a digital video of a photorealistic human (either a prebuilt avatar or a [custom text to speech avatar](#custom-text-to-speech-avatar)) speaking with a natural-sounding voice. The text to speech avatar video can be synthesized asynchronously or in real time. Developers can build applications integrated with text to speech avatar through an API, or use a content creation tool on Speech Studio to create video content without coding. -With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to responsible AI practices. +With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to [responsible AI practices](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context). > [!NOTE] > The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia. The text to speech avatar feature is only available in the following service reg ### Responsible AI -We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](https://aka.ms/TTS-TN). +We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context) and [disclosure for voice and avatar talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context). ## Next steps |
ai-studio | Content Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md | Select one of the following tabs to get started with content safety in Azure AI Azure AI Studio provides a capability for you to quickly try out text moderation. The *moderate text content* tool takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content. Use Configure filters to rerun and further fine tune the test results. Add specific terms to the blocklist that you want detect and act on. -1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) and select **Explore** from the top menu. +1. Sign in to [Azure AI Studio](https://ai.azure.com) and select **Explore** from the top menu. 1. Select **Content safety** panel under **Responsible AI**. 1. Select **Try it out** in the **Moderate text content** panel. The **Use blocklist** tab lets you create, edit, and add a blocklist to the mode Azure AI Studio provides a capability for you to quickly try out image moderation. The *moderate image content* tool takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content. Use Configure filters to rerun and further fine tune the test results. Add specific terms to the blocklist that you want detect and act on. -1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) and select **Explore** from the top menu. +1. Sign in to [Azure AI Studio](https://ai.azure.com) and select **Explore** from the top menu. 1. Select **Content safety** panel under **Responsible AI**. 1. Select **Try it out** in the **Moderate image content** panel. |
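Outside the Studio **Try it out** experience, the same text moderation check can be run programmatically against your resource. A minimal Python sketch follows; the endpoint shape, the `2023-10-01` api-version, and the request fields are assumptions based on the Azure AI Content Safety REST API, and the endpoint and key are placeholders:

```python
import requests

endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
key = "YourResourceKey"

response = requests.post(
    f"{endpoint}/contentsafety/text:analyze",
    params={"api-version": "2023-10-01"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"text": "Sample text to moderate", "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]},
)
response.raise_for_status()
print(response.json())  # per-category severity results
```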
ai-studio | Hear Speak Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md | The speech to text and text to speech features can be used together or separatel Before you can start a chat session, you need to configure the playground to use the speech to text and text to speech features. -1. Sign in to [Azure AI Studio](https://aka.ms/aistudio). +1. Sign in to [Azure AI Studio](https://ai.azure.com). 1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed chat model from the **Deployment** dropdown. |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | The steps in this tutorial are: Follow these steps to deploy a chat model and test it without your data. -1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page. +1. Sign in to [Azure AI Studio](https://ai.azure.com) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page. 1. Select **Build** from the top menu and then select **Deployments** > **Create**. :::image type="content" source="../media/tutorials/chat-web-app/deploy-create.png" alt-text="Screenshot of the deployments page without deployments." lightbox="../media/tutorials/chat-web-app/deploy-create.png"::: |
ai-studio | What Is Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md | As a developer, you can manage settings such as connections and compute. Your ad +## Azure AI studio enterprise chat solution demo ++Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8). +> [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8] + ## Pricing and Billing Using Azure AI Studio also incurs cost associated with the underlying services, to learn more read [Plan and manage costs for Azure AI services](./how-to/costs-plan-manage.md). ## Region availability -Azure AI Studio is currently available in all regions where Azure OpenAI Service is available. To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services). +Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US. ++To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services). ## How to get access You can explore Azure AI Studio without signing in, but for full functionality a ## Next steps -- [Create a project in Azure AI Studio](./how-to/create-projects.md)-- [Quickstart: Generate product name ideas in the Azure AI Studio playground](quickstarts/playground-completions.md)+- [Create an AI Studio project](./how-to/create-projects.md) +- [Tutorial: Deploy a chat web app](tutorials/deploy-chat-web-app.md) - [Tutorial: Using Azure AI Studio with a screen reader](tutorials/screen-reader.md) |
aks | Artifact Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md | + + Title: Reduce image pull time with Artifact Streaming on Azure Kubernetes Service (AKS) (Preview) +description: Learn how to enable Artifact Streaming on Azure Kubernetes Service (AKS) to reduce image pull time. +++++ Last updated : 11/16/2023+++# Reduce image pull time with Artifact Streaming on Azure Kubernetes Service (AKS) (Preview) ++High performance compute workloads often involve large images, which can cause long image pull times and slow down your workload deployments. Artifact Streaming on AKS allows you to stream container images from Azure Container Registry (ACR) to AKS. AKS only pulls the necessary layers for initial pod startup, reducing the time it takes to pull images and deploy your workloads. ++Artifact Streaming can reduce time to pod readiness by over 15%, depending on the size of the image, and it works best for images <30GB. Based on our testing, we saw reductions in pod start-up times for images <10GB from minutes to seconds. If you have a pod that needs access to a large file (>30GB), then you should mount it as a volume instead of building it as a layer. This is because if your pod requires that file to start, it congests the node. Artifact Streaming isn't ideal for read heavy images from your filesystem if you need that on startup. With Artifact Streaming, pod start-up becomes concurrent, whereas without it, pods start in serial. ++This article describes how to enable the Artifact Streaming feature on your AKS node pools to stream artifacts from ACR. +++## Prerequisites ++* You need an existing AKS cluster with ACR integration. If you don't have one, you can create one using [Authenticate with ACR from AKS][acr-auth-aks]. +* [Enable Artifact Streaming on ACR][enable-artifact-streaming-acr]. +* This feature requires Kubernetes version 1.25 or later. To check your AKS cluster version, see [Check for available AKS cluster upgrades][aks-upgrade]. ++> [!NOTE] +> Artifact Streaming is only supported on Ubuntu 22.04, Ubuntu 20.04, and Azure Linux node pools. Windows node pools aren't supported. ++## Install the `aks-preview` CLI extension ++1. Install the `aks-preview` CLI extension using the [`az extension add`][az-extension-add] command. ++ ```azurecli-interactive + az extension add --name aks-preview + ``` ++2. Update the extension to ensure you have the latest version installed using the [`az extension update`][az-extension-update] command. ++ ```azurecli-interactive + az extension update --name aks-preview + ``` ++## Register the `ArtifactStreamingPreview` feature flag in your subscription ++* Register the `ArtifactStreamingPreview` feature flag in your subscription using the [`az feature register`][az-feature-register] command. ++ ```azurecli-interactive + az feature register --namespace Microsoft.ContainerService --name ArtifactStreamingPreview + ``` ++## Enable Artifact Streaming on ACR ++Enablement on ACR is a prerequisite for Artifact Streaming on AKS. For more information, see [Artifact Streaming on ACR](https://aka.ms/acr/artifact-streaming). ++1. Create an Azure resource group to hold your ACR instance using the [`az group create`][az-group-create] command. ++ ```azurecli-interactive + az group create --name myStreamingTest --location westus + ``` ++2. Create a new premium SKU Azure Container Registry using the [`az acr create`][az-acr-create] command with the `--sku Premium` flag. 
++ ```azurecli-interactive + az acr create --resource-group myStreamingTest --name mystreamingtest --sku Premium + ``` ++3. Configure the default ACR instance for your subscription using the [`az configure`][az-configure] command. ++ ```azurecli-interactive + az configure --defaults acr="mystreamingtest" + ``` ++4. Push or import an image to the registry using the [`az acr import`][az-acr-import] command. ++ ```azurecli-interactive + az acr import -source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest + ``` ++5. Create a streaming artifact from the image using the [`az acr artifact-streaming create`][az-acr-artifact-streaming-create] command. ++ ```azurecli-interactive + az acr artifact-streaming create --image jupyter/all-spark-notebook:latest + ``` ++6. Verify the generated Artifact Streaming using the [`az acr manifest list-referrers`][az-acr-manifest-list-referrers] command. ++ ```azurecli-interactive + az acr manifest list-referrers -n jupyter/all-spark-notebook:latest + ``` ++## Enable Artifact Streaming on AKS ++### Enable Artifact Streaming on a new node pool ++* Create a new node pool with Artifact Streaming enabled using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-artifact-streaming` flag set to `true`. ++ ```azurecli-interactive + az aks nodepool add \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name myNodePool \ + --enable-artifact-streaming true + ``` ++### Enable Artifact Streaming on an existing node pool ++* Enable Artifact Streaming on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--enable-artifact-streaming` flag. ++ ```azurecli-interactive + az aks nodepool update \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name myNodePool \ + --enable-artifact-streaming + ``` ++## Check if Artifact Streaming is enabled ++Now that you enabled Artifact Streaming on a premium ACR and connected that to an AKS node pool with Artifact Streaming enabled, any new pod deployments on this cluster with an image pull from the ACR with Artifact Streaming enabled will see reductions in image pull times. ++* Check if your node pool has Artifact Streaming enabled using the [`az aks nodepool show`][az-aks-nodepool-show] command. ++ ```azurecli-interactive + az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool grep ArtifactStreamingConfig + ``` ++ In the output, check that the `Enabled` field is set to `true`. ++## Disable Artifact Streaming on AKS ++You can disable Artifact Streaming at the node pool level. The change takes effect on the next node pool upgrade. ++> [!NOTE] +> Artifact Streaming requires connection to and enablement on an ACR. If you disconnect or disable from ACR, Artifact Streaming is automatically disabled on the node pool. If you don't disable Artifact Streaming at the node pool level, it begins working immediately once you resume the connection to and enablement on ACR. ++### Disable Artifact Streaming on an existing node pool ++* Disable Artifact Streaming on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--disable-artifact-streaming` flag. 
++ ```azurecli-interactive + az aks nodepool update \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name myNodePool \ + --disable-artifact-streaming + ``` ++## Next steps ++This article described how to enable Artifact Streaming on your AKS node pools to stream artifacts from ACR and reduce image pull time. To learn more about working with container images in AKS, see [Best practices for container image management and security in AKS][aks-image-management]. ++<!-- LINKS --> +[enable-artifact-streaming-acr]: #enable-artifact-streaming-on-acr +[acr-auth-aks]: ./cluster-container-registry-integration.md +[aks-upgrade]: ./upgrade-cluster.md +[az-extension-add]: /cli/azure/extension#az-extension-add +[az-extension-update]: /cli/azure/extension#az-extension-update +[az-feature-register]: /cli/azure/feature#az-feature-register +[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add +[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update +[aks-image-management]: ./operator-best-practices-container-image-management.md +[az-group-create]: /cli/azure/group#az-group-create +[az-acr-create]: /cli/azure/acr#az-acr-create +[az-configure]: /cli/azure#az_configure +[az-acr-import]: /cli/azure/acr#az-acr-import +[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create +[az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers +[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az-aks-nodepool-show |
aks | Confidential Containers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/confidential-containers-overview.md | Last updated 11/13/2023 # Confidential Containers (preview) with Azure Kubernetes Service (AKS) -Confidential containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Azure Kubernetes Service (AKS) includes Confidential Containers (preview) on AKS. +Confidential Containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Azure Kubernetes Service (AKS) includes Confidential Containers (preview) on AKS. Confidential Containers builds on Kata Confidential Containers and hardware-based encryption to encrypt container memory. It establishes a new level of data confidentiality by preventing data in memory during computation from being in clear text, readable format. Trust is earned in the container through hardware attestation, allowing access to the encrypted data by trusted entities. |
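The Confidential Containers change above is conceptual, so the row contains no commands. As a rough, hedged sketch only: a Confidential Containers node pool is typically added with a Kata-isolated workload runtime. The `--workload-runtime KataCcIsolation` flag, the `AzureLinux` OS SKU, and the `Standard_DC4as_cc_v5` VM size below are assumptions about the preview and aren't taken from the article.

```azurecli-interactive
# Sketch only: add a Confidential Containers (preview) node pool to an existing AKS cluster.
# The workload runtime flag and the confidential-child VM size are assumed values; verify them
# against the AKS Confidential Containers article before use.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name confcon \
    --os-sku AzureLinux \
    --node-vm-size Standard_DC4as_cc_v5 \
    --workload-runtime KataCcIsolation
```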
aks | Gpu Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md | Last updated 04/10/2023 # Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS) -Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported with AKS. +Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters. +## Supported GPU-enabled VMs +To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported on AKS. + > [!NOTE] > GPU-enabled VMs contain specialized hardware subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability]. +## Limitations +* AKS does not support Windows GPU-enabled node pools. +* If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md). +* [NVadsA10](https://learn.microsoft.com/azure/virtual-machines/nva10v5-series) v5-series are not a recommended SKU for GPU VHD. + ## Before you begin * This article assumes you have an existing AKS cluster. If you don't have a cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. * You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. -> [!NOTE] -> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md). - ## Get the credentials for your cluster * Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group: This article helps you provision nodes with schedulable GPUs on new and existing az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ``` -## Add the NVIDIA device plugin +## Options for using NVIDIA GPUs -There are two ways to add the NVIDIA device plugin: +There are three ways to add the NVIDIA device plugin: 1. [Using the AKS GPU image](#update-your-cluster-to-use-the-aks-gpu-image-preview) 2. [Manually installing the NVIDIA device plugin](#manually-install-the-nvidia-device-plugin)+3. 
Using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) ++### Use NVIDIA GPU Operator with AKS +You can use the NVIDIA GPU Operator by skipping the gpu driver installation on AKS. For more information about using the NVIDIA GPU Operator with AKS, see [NVIDIA Documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html). ++Adding the node pool tag `SkipGPUDriverInstall=true` will skip installing the GPU driver automatically on newly created nodes in the node pool. Any existing nodes will not be changed - the pool can be scaled to 0 and back up to make the change take effect. You can specify the tag using the `--nodepool-tags` argument to [`az aks create`][az-aks-create] command (for a new cluster) or `--tags` with [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update]. > [!WARNING] > We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image. ### Update your cluster to use the AKS GPU image (preview) -> [!NOTE] -> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md). - AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] To see the GPU in action, you can schedule a GPU-enabled workload with the appro [nvidia-github]: https://github.com/NVIDIA/k8s-device-plugin <!-- LINKS - internal -->+[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md To see the GPU in action, you can schedule a GPU-enabled workload with the appro [az-feature-show]: /cli/azure/feature#az-feature-show [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update+[NVadsA10]: /azure/virtual-machines/nva10v5-series |
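To make the `SkipGPUDriverInstall=true` tag mentioned in the GPU article above concrete, here's a minimal sketch of adding a GPU node pool that skips the automatic driver install so the NVIDIA GPU Operator can manage the driver instead; the resource names are placeholders and the VM size is the article's recommended minimum.

```azurecli-interactive
# Add a GPU node pool whose new nodes skip the automatic NVIDIA driver installation.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunp \
    --node-count 1 \
    --node-vm-size Standard_NC6s_v3 \
    --tags SkipGPUDriverInstall=true
```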
api-management | Credentials How To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-azure-ad.md | On the **Connection** tab, complete the steps for your connection to the provide <inbound> <base /> <get-authorization-context provider-id="MicrosoftEntraID-01" authorization-id="first-connection" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />- <set-header name="credential" exists-action="override"> - <value>@("Bearer " + ((credential)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value> + <set-header name="Authorization" exists-action="override"> + <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value> </set-header> </inbound> <backend> |
api-management | Developer Portal Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md | The call failure may also be caused by a TLS/SSL certificate, which is assigned | Microsoft Internet Explorer | No | | Mozilla Firefox | Yes<sup>1</sup> | - <small><sup>1</sup> Supported in the two latest production versions.</small> + <sup>1</sup> Supported in the two latest production versions. ## Local development of my self-hosted portal is no longer working |
api-management | Retry Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md | The `retry` policy executes its child policies once and then retries their execu | Attribute | Description | Required | Default | | - | -- | -- | - | | condition | Boolean. Specifies whether retries should be stopped (`false`) or continued (`true`). Policy expressions are allowed. | Yes | N/A |-| count | A positive number specifying the maximum number of retries to attempt. Policy expressions are allowed. | Yes | N/A | +| count | A positive number between 1 and 50 specifying the number of retries to attempt. Policy expressions are allowed. | Yes | N/A | | interval | A positive number in seconds specifying the wait interval between the retry attempts. Policy expressions are allowed. | Yes | N/A | | max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. Policy expressions are allowed. | No | N/A | | delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. Policy expressions are allowed. | No | N/A | In the following example, sending a request to a URL other than the defined back * [API Management advanced policies](api-management-advanced-policies.md) |
api-management | Set Edit Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md | To configure a policy: The **ip-filter** policy now appears in the **Inbound processing** section. +## Get assistance creating policies using Microsoft Copilot for Azure (preview) +++[Microsoft Copilot for Azure](../copilot/overview.md) (preview) provides policy authoring capabilities for Azure API Management. Using Copilot for Azure in the context of API Management's policy editor, you can create policies that match your specific requirements without knowing the syntax, or have policies you've already configured explained to you. This proves particularly useful for handling complex policies with multiple requirements. ++You can prompt Copilot for Azure to generate policy definitions, then copy the results into the policy editor and make any necessary adjustments. Ask questions to gain insights into different options, modify the provided policy, or clarify the policy you already have. [Learn more](../copilot/author-api-management-policies.md) about this capability. ++> [!NOTE] +> Microsoft Copilot for Azure requires [registration](../copilot/limited-access.md#registration-process) (preview) and is currently only available to approved enterprise customers and partners. + ## Configure policies at different scopes API Management gives you flexibility to configure policy definitions at multiple [scopes](api-management-howto-policies.md#scopes), in each of the policy sections. |
app-service | Configure Ssl Certificate In Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md | To follow this how-to guide: In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**. -From the left navigation of your app, select **TLS/SSL settings**, then select **Private Key Certificates (.pfx)** or **Public Key Certificates (.cer)**. +From the left navigation of your app, select **Certificates**, then select **Bring your own certificates (.pfx)** or **Public key certificates (.cer)**. Find the certificate you want to use and copy the thumbprint. |
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To k When you swap two slots (usually from a staging slot into the production slot), App Service does the following to ensure that the target slot doesn't experience downtime: -1. Apply the following settings from the target slot (for example, the production slot) to all instances of the source slot: +1. Apply the following settings from the source slot (for example, the production slot) to all instances of the target slot: - [Slot-specific](#which-settings-are-swapped) app settings and connection strings, if applicable. - [Continuous deployment](deploy-continuous-deployment.md) settings, if enabled. - [App Service authentication](overview-authentication-authorization.md) settings, if enabled. - Any of these cases trigger all instances in the source slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings. + Any of these cases trigger all instances in the target slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings. -1. Wait for every instance in the target slot to complete its restart. If any instance fails to restart, the swap operation reverts all changes to the source slot and stops the operation. +1. Wait for every instance in the source slot to complete its restart. If any instance fails to restart, the swap operation reverts all changes to the source slot and stops the operation. 1. If [local cache](overview-local-cache.md) is enabled, trigger local cache initialization by making an HTTP request to the application root ("/") on each instance of the source slot. Wait until each instance returns any HTTP response. Local cache initialization causes another restart on each instance. |
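The swap behavior described above applies regardless of how the swap is started; as a minimal sketch with placeholder names, a staging-to-production swap (with an optional preview phase) can be run from the CLI.

```azurecli-interactive
# Start a swap with preview so you can validate the source slot after the first phase, then complete it.
az webapp deployment slot swap \
    --resource-group myResourceGroup \
    --name myWebApp \
    --slot staging \
    --target-slot production \
    --action preview

az webapp deployment slot swap \
    --resource-group myResourceGroup \
    --name myWebApp \
    --slot staging \
    --target-slot production \
    --action swap
```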
azure-app-configuration | Concept Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md | Azure relies upon DNS resolution to route connections from the VNet to the confi ## DNS changes for private endpoints -When you create a private endpoint, the DNS CNAME resource record for the configuration store is updated to an alias in a subdomain with the prefix `privatelink`. Azure also creates a [private DNS zone](../dns/private-dns-overview.md) corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. +When you create a private endpoint, the DNS CNAME resource record for the configuration store is updated to an alias in a subdomain with the prefix `privatelink`. Azure also creates a [private DNS zone](../dns/private-dns-overview.md) corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. Enabling geo-replication creates separate DNS records for each replica with unique IP addresses in the private DNS zone. When you resolve the endpoint URL from within the VNet hosting the private endpoint, it resolves to the private endpoint of the store. When resolved from outside the VNet, the endpoint URL resolves to the public endpoint. When you create a private endpoint, the public endpoint is disabled. -If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `[Your-store-name].privatelink.azconfig.io` (or `[Your-store-name]-[replica-name].privatelink.azconfig.io` for a replica if the geo-replication is enabled) with the private endpoint IP address. --> [!TIP] -> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the store name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records. +If you are using a custom DNS server on your network, you need to configure it to delegate your `privatelink` subdomain to the private DNS zone for the VNet. Alternatively, you can configure the A records for your store's private link URLs, which are either `[Your-store-name].privatelink.azconfig.io` or `[Your-store-name]-[replica-name].privatelink.azconfig.io` if geo-replication is enabled, with unique private IP addresses of the private endpoint. ## Pricing |
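To observe the `privatelink` aliasing described above, a quick DNS lookup against the store's endpoint (the store name here is a placeholder) returns a private IP when run inside the VNet and the public endpoint when run outside it.

```bash
# Resolve the App Configuration endpoint; inside the VNet this follows the privatelink CNAME to a private IP.
nslookup my-appconfig-store.azconfig.io
```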
azure-app-configuration | Concept Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md | Title: Snapshots in Azure App Configuration (preview) + Title: Snapshots in Azure App Configuration description: Details of Snapshots in Azure App Configuration Previously updated : 05/16/2023 Last updated : 11/15/2023 -# Snapshots (preview) +# Snapshots A snapshot is a named, immutable subset of an App Configuration store's key-values. The key-values that make up a snapshot are chosen during creation time through the usage of key and label filters. Once a snapshot is created, the key-values within are guaranteed to remain unchanged. |
azure-app-configuration | Howto Create Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md | Title: How to manage and use snapshots (preview) in Azure App Configuration + Title: How to manage and use snapshots in Azure App Configuration description: How to manage and use snapshots in an Azure App Configuration store. Previously updated : 09/28/2023 Last updated : 11/15/2023 -# Manage and use snapshots (preview) +# Manage and use snapshots In this article, learn how to create, use and manage snapshots in Azure App Configuration. Snapshot is a set of App Configuration settings stored in an immutable state. In your App Configuration store, go to **Operations** > **Configuration explorer As a temporary workaround, you can switch to using Access keys authentication from either the Configuration explorer or the Feature manager blades. You should then see the Snapshot blade displayed properly, assuming you have permission for the access keys. -Under **Operations** > **Snapshots (preview)**, select **Create a new snapshot**. +Under **Operations** > **Snapshots**, select **Create a new snapshot**. 1. Enter a **snapshot name** and optionally also add **Tags**. 1. Under **Choose the composition type**, keep the default value **Key (default)**. Under **Operations** > **Snapshots (preview)**, select **Create a new snapshot** To create sample snapshots and check how the snapshots feature work, use the snapshot sandbox. This sandbox contains sample data you can play with to better understand how snapshot's composition type and filters work. -1. In **Operations** > **Snapshots (preview)** > **Active snapshots**, select **Test in sandbox**. +1. In **Operations** > **Snapshots** > **Active snapshots**, select **Test in sandbox**. 1. Review the sample data and practice creating snapshots by filling out the form with a composition type and one or more filters. 1. Select **Create** to generate the sample snapshot. 1. Check out the snapshot result generated under **Generated sample snapshot**. The sample snapshot displays all keys that are included in the sample snapshot, according to your selection. spring: ## Manage active snapshots -The page under **Operations** > **Snapshots (preview)** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store. +The page under **Operations** > **Snapshots** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store. :::image type="content" source="./media/howto-create-snapshots/snapshots-view-list.png" alt-text="Screenshot of the list of active snapshots."::: In the **Active snapshots** tab, select the ellipsis **...** on the right of an ## Manage archived snapshots -Go to **Operations** > **Snapshots (preview)** > **Archived snapshots** to view the list of all archived snapshots in an App Configuration store. Archived snapshots remain accessible for the retention period that was selected during their creation. +Go to **Operations** > **Snapshots** > **Archived snapshots** to view the list of all archived snapshots in an App Configuration store. Archived snapshots remain accessible for the retention period that was selected during their creation. 
:::image type="content" source="./media/howto-create-snapshots/archived-snapshots.png" alt-text="Screenshot of the list of archived snapshots."::: Detailed view of snapshot is available in the archive state as well. In the **Ar ### Recover an archived snapshot -In the **Archived snapshots** tab, select the ellipsis **...** on the right of an archived snapshot and select **Recover** to recover a snapshot. Confirm App Configuration snapshot recovery by selecting **Yes** or cancel with **No**. Once a snapshot has been recovered, a notification appears to confirm the operation and the list of archived snapshots is updated. +In the **Archived snapshots** tab, select the ellipsis **...** on the right of an archived snapshot and select **Recover** to recover a snapshot. Once a snapshot has been recovered, a notification appears to confirm the operation and the list of archived snapshots is updated. :::image type="content" source="./media/howto-create-snapshots/recover-snapshots.png" alt-text="Screenshot of the recover option in the archived snapshots."::: |
azure-app-configuration | Rest Api Snapshot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-snapshot.md | -# Snapshots +# Snapshot A snapshot is a resource identified uniquely by its name. See details for each operation. Use the optional `$select` query string parameter and provide a comma-separated ```http GET /kv?snapshot={name}&$select=key,value&api-version={api-version} HTTP/1.1-``` +``` |
azure-arc | Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md | Installing the Connected Machine agent for Window applies the following system-w | Service name | Display name | Process name | Description | |--|--|--|-|- | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | - | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. | - | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. | + | himds | Azure Hybrid Instance Metadata Service | `himds.exe` | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | + | GCArcService | Guest configuration Arc Service | `gc_arc_service.exe` (gc_service.exe prior to version 1.36) | Audits and enforces Azure guest configuration policies on the machine. | + | ExtensionService | Guest configuration Extension Service | `gc_extension_service.exe` (gc_service.exe prior to version 1.36) | Installs, updates, and manages extensions on the machine. | * Agent installation creates the following virtual service account. Installing the Connected Machine agent for Linux applies the following system-wi The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: * The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.-* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply: +* The Extension Service agent can use up to 5% of the CPU on Windows machines and 30% of the CPU on Linux machines to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply: | Extension type | Operating system | CPU limit | | -- | - | | | AzureMonitorLinuxAgent | Linux | 60% | | AzureMonitorWindowsAgent | Windows | 100% |- | AzureSecurityLinuxAgent | Linux | 30% | | LinuxOsUpdateExtension | Linux | 60% | | MDE.Linux | Linux | 60% | | MicrosoftDnsAgent | Windows | 100% | |
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | The Azure Connected Machine agent receives improvements on an ongoing basis. Thi - Known issues - Bug fixes +## Version 1.32 - July 2023 ++Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### New features ++- Added support for the Debian 12 operating system +- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired." ++### Fixed ++- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure. +- Improved local logging when there are network communication errors + ## Version 1.31 - June 2023 Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | The Azure Connected Machine agent receives improvements on an ongoing basis. To This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md). +## Version 1.36 - November 2023 ++Download for [Windows](https://download.microsoft.com/download/5/e/9/5e9081ed-2ee2-4b3a-afca-a8d81425bcce/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### New features ++- [azcmagent show](azcmagent-show.md) now reports extended security license status on Windows Server 2012 server machines. +- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the Azure Arc-enabled SQL Server endpoints. This will enable you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for Azure Arc-enabled SQL Server. +- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase will help improve reliability of extension install, upgrade and uninstall operations. +- Older extension manager and machine configuration agent logs are automatically zipped to reduce disk space requirements. +- New executable names for the extension manager (`gc_extension_service`) and machine configuration (`gc_arc_service`) agents on Windows to help you distinguish the two services. For more information, see [Windows agent installation details](./agent-overview.md#windows-agent-installation-details). ++### Bug fixes ++- [azcmagent connect](azcmagent-connect.md) now uses the latest API version when creating the Azure Arc-enabled server resource to ensure Azure policies targeting new properties can take effect. +- Upgraded the OpenSSL library and PowerShell runtime shipped with the agent to include the latest security fixes. +- Fixed an issue that could prevent the agent from reporting the correct product type on Windows machines. +- Improved handling of upgrades when the previously installed extension version was not in a successful state. + ## Version 1.35 - October 2023 Download for [Windows](https://download.microsoft.com/download/e/7/0/e70b1753-646e-4aea-bac4-40187b5128b0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) This endpoint will be removed from `azcmagent check` in a future release. - You can now set the [agent mode](security-overview.md#agent-modes) before connecting the agent to Azure. - The agent now responds to instance metadata service (IMDS) requests even when the connection to Azure is temporarily unavailable. -## Version 1.32 - July 2023 --Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### New features --- Added support for the Debian 12 operating system-- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. 
Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired."--### Fixed --- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure.-- Improved local logging when there are network communication errors- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. |
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | If you choose to license based on physical cores, the licensing requires a minim If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per Virtual Machine. There are two main scenarios where this model is advisable: -1. If the VM is running on a third-party host or hyper scaler like AWS, GCP, or OCI. +1. If the VM is running on a third-party host or cloud service provider like AWS, GCP, or OCI. -1. The Windows Server was licensed on a virtualization basis. In most cases, customers elect the Standard edition for virtual core-based licenses. +1. The Windows Server operating system was licensed on a virtualization basis. An additional scenario (scenario 1, below) is a candidate for VM/Virtual core licensing when the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later). +> [!IMPORTANT] +> Virtual core licensing can't be used on physical servers. When creating a license with virtual cores, always select the standard edition instead of datacenter, even if the operating system is datacenter edition. + ### License limits Each WS2012 ESU license can cover up to and including 10,000 cores. If you need ESUs for more than 10,000 cores, split the total number of cores across multiple licenses. As servers no longer require ESUs because they've been migrated to Azure, Azure > [!NOTE] > This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings.-> ## Scenario based examples: Compliant and Cost Effective Licensing |
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | Proxy bypass value when set to `ArcData` only bypasses the traffic of the Azure | `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com`</br> `san-af-<location>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com`| | `ArcData` <sup>1</sup> | `san-af-<region>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com` | -<sup>1</sup> To use proxy bypass value `ArcData`, you need a supported Azure Connected Machine agent and a supported Azure Extension for SQL Server version. Releases are supported beginning November, 2023. To see the latest release, check the release notes: - - [Azure Connected Machine Agent](./agent-release-notes.md) - - [Azure extension for SQL Server](/sql/sql-server/azure-arc/release-notes?view=sql-server-ver16&preserve-view=true) -- Later versions are also supported. +<sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the Azure Arc-enabled SQL Server endpoints in the "Arc" proxy bypass value. To send Microsoft Entra ID and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command: |
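The command that follows the sentence above isn't captured in this excerpt. As a hedged sketch of what such a configuration looks like, assuming the `azcmagent config` syntax and a placeholder proxy URL:

```bash
# Send agent traffic through the proxy, but bypass the proxy for Azure Arc endpoints.
azcmagent config set proxy.url "http://myproxyserver:8080"
azcmagent config set proxy.bypass "Arc"
```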
azure-boost | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-boost/overview.md | Boost systems embrace multiple layers of defense-in-depth, including ubiquitous Azure Boost uses Security Enhanced Linux (SELinux) to enforce the principle of least privilege for all software running on its system on chip. All control plane and data plane software running on top of the Boost OS is restricted to running only with the minimum set of privileges required to operate – the operating system restricts any attempt by Boost software to act in an unexpected manner. Boost OS properties make it difficult to compromise code, data, or the availability of Boost and Azure hosting Infrastructure. - **Rust memory safety:**-RUST serves as the primary language for all new code written on the Boost system, to provide memory safety without impacting performance. Control and data plane operations are isolated with memory safety improvements that enhance Azure's ability to keep tenants safe. +Rust serves as the primary language for all new code written on the Boost system, to provide memory safety without impacting performance. Control and data plane operations are isolated with memory safety improvements that enhance Azure's ability to keep tenants safe. - **FIPS certification:** Boost employs a FIPS 140 certified system kernel, providing reliable and robust security validation of cryptographic modules. |
azure-functions | Create First Function Cli Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md | Before you begin, you must have the following requirements in place: + The [Azurite storage emulator](../storage/common/storage-use-azurite.md?tabs=npm#install-azurite). While you can also use an actual Azure Storage account, the article assumes you're using this emulator. ::: zone-end - [!INCLUDE [functions-install-core-tools](../../includes/functions-install-core-tools.md)] ## <a name="create-venv"></a>Create and activate a virtual environment |
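Because the quickstart above assumes the Azurite storage emulator, a common way to install and start it locally is sketched below; the npm-based install is one option among several, and the folder paths are placeholders.

```bash
# Install the Azurite storage emulator globally, then start it with a local data folder and debug log.
npm install -g azurite
azurite --silent --location ./azurite --debug ./azurite/debug.log
```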
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | Before you begin, make sure that you have the following requirements in place: + The [Azurite V3 extension](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite) local storage emulator. While you can also use an actual Azure storage account, this article assumes you're using the Azurite emulator. ::: zone-end - [!INCLUDE [functions-install-core-tools-vs-code](../../includes/functions-install-core-tools-vs-code.md)] ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Functions Container Apps Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md | Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Container Apps to host containerized function apps in Azure Functions. Previously updated : 07/30/2023 Last updated : 11/15/2023 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers by using Azure Container Apps. Keep in mind the following considerations when deploying your function app conta + Azure Event Hubs + Kafka* \*The protocol value of `ssl` isn't supported when hosted on Container Apps. Use a [different protocol value](functions-bindings-kafka-trigger.md?pivots=programming-language-csharp#attributes). -+ Dapr is currently enabled by default in the preview release. In a later release, Dapr loading should be configurable. + For the built-in Container Apps [policy definitions](../container-apps/policy-reference.md#policy-definitions), currently only environment-level policies apply to Azure Functions containers. + When using Container Apps, you don't have direct access to the lower-level Kubernetes APIs. -+ Use of user-assigned managed identities is currently supported, and is preferred for accessing Azure Container Registry. For more information, see [Add a user-assigned identity](../app-service/overview-managed-identity.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#add-a-user-assigned-identity). + The `containerapp` extension conflicts with the `appservice-kube` extension in Azure CLI. If you have previously published apps to Azure Arc, run `az extension list` and make sure that `appservice-kube` isn't installed. If it is, you can remove it by running `az extension remove -n appservice-kube`. -+ To invoke DAPR APIs or to run the [Functions Dapr extension](https://github.com/Azure/azure-functions-dapr-extension), make sure the minimum replica count is set to at least `1`. This enables the DAPR sidecar to run in the background to handle DAPR requests. The Functions Dapr extension is also in preview, with help provided [in the repository](https://github.com/Azure/azure-functions-dapr-extension/issues). ++ The Functions Dapr extension is also in preview, with help provided [in the repository](https://github.com/Azure/azure-functions-dapr-extension/issues). ## Next steps |
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | These prerequisites are only required to [run and debug your functions locally]( + [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. ::: zone-end ## Create an Azure Functions project You should monitor the execution of your functions by integrating your function To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md). - ### Enable emulation in Visual Studio Code Now that you've configured the Terminal with Rosetta to run x86 emulation for Python development, you can use the following steps to integrate this terminal emulation with Visual Studio Code: |
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | Title: Python developer reference for Azure Functions description: Understand how to develop functions with Python Previously updated : 05/25/2023 Last updated : 11/14/2023 ms.devlang: python zone_pivot_groups: python-mode-functions Python v1 programming model: You can also create Python v1 functions in the Azure portal. -The following considerations apply for local Python development: --+ Although you can develop your Python-based Azure functions locally on Windows, Python is supported only on a Linux-based hosting plan when it's running in Azure. For more information, see the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime). --+ Functions doesn't currently support local Python function development on ARM64 devices, including on a Mac with an M1 chip. To learn more, see [x86 emulation on ARM64](functions-run-local.md#x86-emulation-on-arm64). +> [!TIP] +> Although you can develop your Python-based Azure functions locally on Windows, Python is supported only on a Linux-based hosting plan when it's running in Azure. For more information, see the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime). ## Programming model |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | Title: Develop Azure Functions locally using Core Tools description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you deploy them to run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 08/24/2023 Last updated : 11/14/2023 zone_pivot_groups: programming-languages-set-functions In the terminal window or from a command prompt, run the following command to cr func init MyProjFolder --worker-runtime dotnet-isolated ``` -By default this command creates a project that runs in-process with the Functons host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference. +By default this command creates a project that runs in-process with the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference. ### [In-process](#tab/in-process) The following considerations apply to Core Tools installations: + Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today. ::: zone-end - When using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). ## Next steps Learn how to [develop, test, and publish Azure functions by using Azure Function [func azure functionapp publish]: functions-core-tools-reference.md?tabs=v2#func-azure-functionapp-publish -[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle +[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle |
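To illustrate the `--target-framework` option mentioned above, the following sketch creates an isolated worker project that targets .NET Framework; the `net48` moniker is an assumption, so check the `func init` reference for the supported values.

```bash
# Create a .NET isolated worker project that targets .NET Framework 4.8 instead of the default LTS release.
func init MyProjFolder --worker-runtime dotnet-isolated --target-framework net48
```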
azure-maps | How To Dev Guide Csharp Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md | var client = new MapsSearchClient(credential, clientId); You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key: |
azure-maps | How To Dev Guide Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md | public class Demo { You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key:  |
azure-maps | How To Dev Guide Js Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md | const client = MapsSearch(credential, process.env.MAPS_CLIENT_ID); You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code. |
azure-maps | How To Dev Guide Py Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md | maps_search_client = MapsSearchClient( You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key: |
azure-maps | How To Manage Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md | To view your Azure Maps authentication details: 3. Select **Authentication** in the settings section of the left pane. - :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details."::: + :::image type="content" border="false" source="./media/shared/get-key.png" alt-text="Screenshot showing your Azure Maps subscription key in the Azure portal." lightbox="./media/shared/get-key.png"::: ## Choose an authentication category |
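Besides the portal's **Authentication** pane shown above, the subscription keys can also be read from the CLI; this is a sketch with placeholder names, and the parameter names are assumed from the `az maps account` command group.

```azurecli-interactive
# List the primary and secondary subscription keys for an Azure Maps account.
az maps account keys list --name MyMapsAccount --resource-group MyResourceGroup
```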
azure-maps | How To Secure Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md | To create a new application registration: 4. Select the **+ New registration** tab. - :::image type="content" border="true" source="./media/how-to-manage-authentication/app-registration.png" alt-text="View app registrations."::: + :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID."::: 5. Enter a **Name**, and then select a **Support account type**. |
azure-maps | How To Secure Device Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md | This guide discusses how to secure public applications or devices that can't sec Create the device based application in Microsoft Entra ID to enable Microsoft Entra sign-in, which is granted access to Azure Maps REST APIs. 1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**. -- :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID"::: + + :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID."::: 2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more information, see Microsoft Entra ID [Desktop app that calls web APIs: App registration]. Then **Register** the application. |
azure-maps | How To Secure Spa Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md | Create the web application in Microsoft Entra ID for users to sign in. The web a 1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**. - :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="Screenshot showing the new registration page in the App registrations blade in Microsoft Entra ID."::: + :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID."::: 2. Enter a **Name**, choose a **Support account type**, provide a redirect URI that represents the url to which Microsoft Entra ID issues the token and is the url where the map control is hosted. For a detailed sample, see [Azure Maps Microsoft Entra ID samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**. |
azure-maps | How To Secure Webapp Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md | You must create the web application in Microsoft Entra ID for users to sign in. 1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**. - :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing App registration." lightbox="./media/how-to-manage-authentication/app-registration.png"::: + :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID."::: 2. Enter a **Name**, choose a **Support account type**, provide a redirect URI that represents the url to which Microsoft Entra ID issues the token, which is the url where the map control is hosted. For more information, see Microsoft Entra ID [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Microsoft Entra scenario. |
azure-maps | Quick Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md | Once your Azure Maps account is successfully created, retrieve the subscription >[!NOTE] > For security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing]. ## Create a project in Android Studio |
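The key-rotation note above describes the portal's cycle/refresh button; a hedged CLI equivalent is sketched below, with the command and parameter names assumed rather than taken from the article.

```azurecli-interactive
# Regenerate the primary key after clients have been switched to the secondary key.
az maps account keys renew --name MyMapsAccount --resource-group MyResourceGroup --key primary
```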
azure-maps | Quick Demo Map App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md | Once your Azure Maps account is successfully created, retrieve the subscription 2. In the settings section, select **Authentication**. 3. Copy the **Primary Key** and save it locally to use later in this tutorial. >[!NOTE] > This quickstart uses the [Shared Key] authentication approach for demonstration purposes, but the preferred approach for any production environment is to use [Microsoft Entra ID] authentication. |
azure-maps | Quick Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md | Once your Maps account is successfully created, retrieve the primary key that en <!-- > If you use the Azure subscription key instead of the Azure Maps primary key, your map won't render properly. Also, for security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md) -->-![Get the subscription key.](./media/ios-sdk/quick-ios-app/get-key.png) ## Create a project in Xcode |
azure-monitor | Data Collection Iis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md | To complete this procedure, you need: For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment). -- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that runs IIS. - An IIS log file in W3C format must be stored on the local drive of the machine on which Azure Monitor Agent is running. - Each entry in the log file must be delineated with an end of line. To create the data collection rule in the Azure portal: :::image type="content" source="media/data-collection-iis/iis-data-collection-rule.png" lightbox="media/data-collection-iis/iis-data-collection-rule.png" alt-text="Screenshot that shows the Azure portal form to select basic performance counters in a data collection rule."::: 1. Specify a file pattern to identify the directory where the log files are located. -1. On the **Destination** tab, add a destinations for the data source. +1. On the **Destination** tab, add a destination for the data source. <!-- convertborder later --> :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the Azure portal form to add a data source in a data collection rule." border="false"::: |
azure-monitor | Data Collection Rule Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md | This article describes how to collect events and performance counters from virtu To complete this procedure, you need: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - Associate the data collection rule to specific virtual machines. ## Create a data collection rule |
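The prerequisites above end with associating the data collection rule to specific virtual machines; a minimal sketch of that association via the CLI follows, assuming the `monitor-control-service` extension is installed and using placeholder resource IDs.

```azurecli-interactive
# Associate an existing data collection rule with a VM so the Azure Monitor Agent starts collecting from it.
az monitor data-collection rule association create \
    --name "myDcrAssociation" \
    --rule-id "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Insights/dataCollectionRules/myDCR" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM"
```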
azure-monitor | Data Collection Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md | You need: - A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - A [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. ## Syslog record properties |
azure-monitor | Data Collection Text Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md | To complete this procedure, you need: For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment). -- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. -- A Virtual Machine, Virtual Machine Scale Set, Arc-enabled server on-premises or Azure Monitoring Agent on a Windows on-premise client that writes logs to a text or JSON file.+- A Virtual Machine, Virtual Machine Scale Set, Arc-enabled server on-premises or Azure Monitoring Agent on a Windows on-premises client that writes logs to a text or JSON file. Text and JSON file requirements and best practices: - Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored. To create the data collection rule in the Azure portal: - `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents. - `transformKql`: Specifies a [transformation](../logs/../essentials//data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace. - See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the data collection rule. + See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md) if you want to modify the data collection rule. > [!IMPORTANT] > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace. |
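To tie the `filePatterns` and `transformKql` settings together, here's a hedged sketch of the relevant pieces of a custom text log DCR. The table name `Custom-MyTable_CL`, file path, column list, and destination name are placeholders; as noted above, the custom stream name must match the table name in the Log Analytics workspace.

```json
{
  "properties": {
    "streamDeclarations": {
      "Custom-MyTable_CL": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "RawData", "type": "string" }
        ]
      }
    },
    "dataSources": {
      "logFiles": [
        {
          "name": "myTextLogsDataSource",
          "streams": [ "Custom-MyTable_CL" ],
          "filePatterns": [ "C:\\logs\\*.log" ],
          "format": "text",
          "settings": {
            "text": { "recordStartTimestampFormat": "ISO 8601" }
          }
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyTable_CL" ],
        "destinations": [ "myWorkspaceDestination" ],
        "transformKql": "source",
        "outputStream": "Custom-MyTable_CL"
      }
    ]
  }
}
```

Here `transformKql` is the identity transformation (`source`); a real rule would typically parse `RawData` into typed columns before the data reaches the workspace.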
azure-monitor | Resource Manager Data Collection Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md | This article includes sample [Azure Resource Manager templates](../../azure-reso [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] -## Permissions required --| Built-in Role | Scope(s) | Reason | -|:|:|:| -| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To create or edit data collection rules | -| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy associations (i.e. to assign rules to the machine) | -| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To deploy ARM templates | ## Create rule (sample) |
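For the association piece referenced above, a DCR association is its own extension resource scoped to the target machine. The fragment below is a hedged sketch of how it might appear inside one of the article's templates; the parameter names are assumptions and would need matching `parameters` definitions in the surrounding template:

```json
{
  "type": "Microsoft.Insights/dataCollectionRuleAssociations",
  "apiVersion": "2021-09-01-preview",
  "name": "my-dcr-association",
  "scope": "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]",
  "properties": {
    "description": "Associates the data collection rule with the virtual machine.",
    "dataCollectionRuleId": "[parameters('dataCollectionRuleId')]"
  }
}
```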
azure-monitor | Alerts Metric Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md | Title: Creating Metric Alerts for Logs in Azure Monitor description: Tutorial on creating near-real time metric alerts on popular log analytics data. Previously updated : 7/24/2022 Last updated : 11/16/2023 |
azure-monitor | Alerts Troubleshoot Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md | Title: Frequently asked questions about Azure Monitor metric alerts description: Common issues with Azure Monitor metric alerts and possible solutions. Previously updated : 8/31/2022 Last updated : 11/16/2023 ms.reviewer: harelbr # Troubleshoot Azure Monitor metric alerts |
azure-monitor | Azure Cli Metrics Alert Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/azure-cli-metrics-alert-sample.md | Title: Create metric alert monitors in Azure CLI description: Learn how to create metric alerts in Azure Monitor with Azure CLI commands. These samples create alerts for a virtual machine and an App Service Plan. Previously updated : 04/05/2022 Last updated : 11/16/2023 |
azure-monitor | Proactive Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md | Alternatively, you can change the configuration by using Azure Resource Manager These diagnostic tools help you inspect the telemetry from your app: * [Metric explorer](../essentials/metrics-charts.md)-* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Search explorer](../app/transaction-search-and-diagnostics.md?tabs=transaction-search) * [Analytics: Powerful query language](../logs/log-analytics-tutorial.md) Smart detection is automatic, but if you want to set up more alerts, see: |
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | Notice that if you delete an Application Insights resource, the associated Failu An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment. -To investigate further, click on 'View full details in Application Insights' the links in this page take you straight to a [search page](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exception, dependency, or traces. +To investigate further, select 'View full details in Application Insights'; the links on this page take you straight to a [search page](../app/transaction-search-and-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exceptions, dependencies, or traces. You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page. Smart Detection of Failure Anomalies complements other similar but distinct feat These diagnostic tools help you inspect the data from your app: * [Metric explorer](../essentials/metrics-charts.md)-* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Search explorer](../app/transaction-search-and-diagnostics.md?tabs=transaction-search) * [Analytics - powerful query language](../logs/log-analytics-tutorial.md) Smart detections are automatic. But maybe you'd like to set up some more alerts? |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | In Node.js projects, you can use `new applicationInsights.TelemetryClient(instru ## TrackEvent -In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) as individual occurrences. (It isn't related to MVC or other framework "events.") +In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) as individual occurrences. (It isn't related to MVC or other framework "events.") Insert `TrackEvent` calls in your code to count various events. For example, you might want to track how often users choose a particular feature. Or you might want to know how often they achieve certain goals or make specific types of mistakes. The recommended way to send request telemetry is where the request acts as an <a ## Operation context -You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID. +You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID. For more information on correlation, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). requests Send exceptions to Application Insights: * To [count them](../essentials/metrics-charts.md), as an indication of the frequency of a problem.-* To [examine individual occurrences](./search-and-transaction-diagnostics.md?tabs=transaction-search). +* To [examine individual occurrences](./transaction-search-and-diagnostics.md?tabs=transaction-search). The reports include the stack traces. exceptions ## TrackTrace -Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search). +Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search). In .NET [Log adapters](./asp-net-trace-logs.md), use this API to send third-party logs to the portal. 
properties.put("Database", db.ID); telemetry.trackTrace("Slow Database response", SeverityLevel.Warning, properties); ``` -In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search), you can then easily filter out all the messages of a particular severity level that relate to a particular database. +In [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search), you can then easily filter out all the messages of a particular severity level that relate to a particular database. ### Traces in Log Analytics appInsights.setAuthenticatedUserContext(validatedId, accountId); In [Metrics Explorer](../essentials/metrics-charts.md), you can create a chart that counts **Users, Authenticated**, and **User accounts**. -You can also [search](./search-and-transaction-diagnostics.md?tabs=transaction-search) for client data points with specific user names and accounts. +You can also [search](./transaction-search-and-diagnostics.md?tabs=transaction-search) for client data points with specific user names and accounts. > [!NOTE] > The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the user name as the Auth ID for each trace sent by the Application Insights JavaScript SDK. Azure alerts are only on metrics. Create a custom metric that crosses a value th ## <a name="next"></a>Next steps -* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | What's the difference between telemetry processors and telemetry initializers? * [JavaScript SDK](https://github.com/Microsoft/ApplicationInsights-JS) ## <a name="next"></a>Next steps-* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) * [sampling](./sampling.md) |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights provides many experiences to enhance the performance, relia - [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance. - [Application map](app-map.md): A visual overview of application architecture and components' interactions. - [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.-- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.+- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance. - [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints. - Performance view: Review application performance metrics and potential bottlenecks. - Failures view: Identify and analyze failures in your application to minimize downtime. Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we - [Application dashboard](overview-dashboard.md) - [Application Map](app-map.md) - [Live metrics](live-stream.md)-- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search)+- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search) - [Availability overview](availability-overview.md) - [Users, sessions, and events](usage-segmentation.md) |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | To provide feedback, use the feedback option. ## Next steps * To learn more about how correlation works in Application Insights, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md).-* The [end-to-end transaction diagnostic experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) correlates server-side telemetry from across all your Application Insights-monitored components into a single view. +* The [end-to-end transaction diagnostic experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) correlates server-side telemetry from across all your Application Insights-monitored components into a single view. * For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md). |
azure-monitor | Application Insights Asp Net Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md | See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap View your telemetry: - [Explore metrics](../essentials/metrics-charts.md) to monitor performance and usage.-- [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) to diagnose problems.+- [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) to diagnose problems. - [Use Log Analytics](../logs/log-query-overview.md) for more advanced queries. - [Create dashboards](./overview-dashboard.md). |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | In the preceding cases, the proper way of validating that the instrumentation en ## Where to find dependency data * [Application Map](app-map.md) visualizes dependencies between your app and neighboring components.-* [Transaction Diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) shows unified, correlated server data. +* [Transaction Diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) shows unified, correlated server data. * [Browsers tab](javascript.md) shows AJAX calls from your users' browsers. * Select from slow or failed requests to check their dependency calls. * [Analytics](#logs-analytics) can be used to query dependency data. Like every Application Insights SDK, the dependency collection module is also op ## Dependency auto-collection -Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). +Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). ### .NET |
azure-monitor | Asp Net Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md | To get diagnostic data specific to your app, you can insert code to send your ow Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available: -* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./search-and-transaction-diagnostics.md?tabs=transaction-search). +* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./transaction-search-and-diagnostics.md?tabs=transaction-search). * <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information. * <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces to Application Insights. -To see these events, on the left menu, open [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**. +To see these events, on the left menu, open [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**. :::image type="content" source="./media/asp-net-exceptions/customevents.png" lightbox="./media/asp-net-exceptions/customevents.png" alt-text="Screenshot that shows the Search screen."::: Catch ex as Exception End Try ``` -The properties and measurements parameters are optional, but they're useful for [filtering and adding](./search-and-transaction-diagnostics.md?tabs=transaction-search) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary. +The properties and measurements parameters are optional, but they're useful for [filtering and adding](./transaction-search-and-diagnostics.md?tabs=transaction-search) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary. ## Browser exceptions |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | Perhaps your application sends voluminous amounts of data and you're using the A ## <a name="add"></a>Next steps * [Diagnose failures and exceptions in ASP.NET](asp-net-exceptions.md)-* [Learn more about Transaction Search](search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Learn more about Transaction Search](transaction-search-and-diagnostics.md?tabs=transaction-search) * [Set up availability and responsiveness tests](availability-overview.md) <!--Link references--> [availability]: ./availability-overview.md-[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search +[diagnostic]: ./transaction-search-and-diagnostics.md?tabs=transaction-search [exceptions]: asp-net-exceptions.md [start]: ./app-insights-overview.md |
azure-monitor | Availability Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md | To create a new file, right-click under your timer trigger function (for example * [Standard tests](availability-standard-tests.md) * [Availability alerts](availability-alerts.md) * [Application Map](./app-map.md)-* [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) +* [Transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) |
azure-monitor | Availability Standard Tests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md | From an availability test result, you can see the transaction details across all * Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event. * Open the web test result in Visual Studio. -To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics). +To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics). Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics. |
azure-monitor | Configuration With Applicationinsights Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md | Configure a [snapshot collection for ASP.NET applications](snapshot-debugger-vm. [api]: ./api-custom-events-metrics.md [client]: ./javascript.md-[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search +[diagnostic]: ./transaction-search-and-diagnostics.md?tabs=transaction-search [exceptions]: ./asp-net-exceptions.md [netlogs]: ./asp-net-trace-logs.md [new]: ./create-workspace-resource.md |
azure-monitor | Create Workspace Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md | You need the connection strings of all the resources to which your app will send ### Filter on the build number When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds. -You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results. +You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/transaction-search-and-diagnostics.md?tabs=transaction-search) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results. There are several different methods of setting the **Application Version** property. To track the application version, make sure your Microsoft Build Engine process </PropertyGroup> ``` -When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md). +When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/transaction-search-and-diagnostics.md?tabs=transaction-search) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md). The build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio. |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | When you instrument message deletion, make sure you set the operation (correlati ### Dependency types -Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics): +Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics): - `Azure queue` for Azure Storage queues - `Azure Event Hubs` for Azure Event Hubs Each Application Insights operation (request or dependency) involves `Activity`. ## Next steps - Learn the basics of [telemetry correlation](distributed-tracing-telemetry-correlation.md) in Application Insights.-- Check out how correlated data powers [transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) and [Application Map](./app-map.md).+- Check out how correlated data powers [transaction diagnostics experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) and [Application Map](./app-map.md). - See the [data model](./data-model-complete.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. - Check out standard [configuration](configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet) for context properties collection. |
azure-monitor | Distributed Trace Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-trace-data.md | -Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact. +Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact. [Application Insights](app-insights-overview.md#application-insights-overview) can monitor each component separately and detect which component is responsible for failures or performance degradation by using distributed telemetry correlation. This article explains the data model, context-propagation techniques, protocols, and implementation of correlation tactics on different languages and platforms used by Application Insights. |
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | export const clickPluginConfigWithUseDefaultContentNameOrId = { <div className="test1" data-id="test1parent"> <div>Test1</div>- <div><small>with id, data-id, parent data-id defined</small></div> + <div>with id, data-id, parent data-id defined</div> <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button> </div> ``` export const clickPluginConfigWithParentDataTag = { <div className="test2" data-group="buttongroup1" data-id="test2parent"> <div>Test2</div>- <div><small>with data-id, parentid, parent data-id defined</small></div> + <div>with data-id, parentid, parent data-id defined</div> <Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button> </div> ``` export const clickPluginConfigWithParentDataTag = { <div className="test6" data-group="buttongroup1" data-id="test6grandparent"> <div>Test6</div>- <div><small>with data-id, grandparent data-group defined, parent data-id defined</small></div> + <div>with data-id, grandparent data-group defined, parent data-id defined</div> <div data-id="test6parent"> <Button data-id="test6id" variant="info" onClick={trackEvent}>Test6</Button> </div> |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | If you open the Live Metrics pane, the SDKs switch to a higher frequency mode an ## Next steps * [Monitor usage with Application Insights](./usage-overview.md)-* [Use Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) +* [Use Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) * [Profiler](./profiler.md) * [Snapshot Debugger](./snapshot-debugger.md) |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | Because the SDK batches data for submission, there might be a delay before items * Continue to use the application. Take more actions to generate more telemetry. * Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately. * Verify that [required outgoing ports](./ip-addresses.md) are open.-* Use [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) to look for specific events. +* Use [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) to look for specific events. * Check the [FAQ][FAQ]. ## Basic usage |
azure-monitor | Opentelemetry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md | A direct exporter sends telemetry in-process (from the application's code) direc *The currently available Application Insights SDKs and Azure Monitor OpenTelemetry Distros rely on a direct exporter*. -Alternatively, sending application telemetry via an agent like OpenTelemetry-Collector can have some benefits including sampling, post-processing, and more. Azure Monitor is developing an agent and ingestion endpoint that supports [Open Telemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md), providing a path for any OpenTelemetry-supported programming language beyond our [supported languages](platforms.md) to use to Azure Monitor. - > [!NOTE] > For Azure Monitor's position on the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md), see the [OpenTelemetry FAQ](./opentelemetry-enable.md#can-i-use-the-opentelemetry-collector). Alternatively, sending application telemetry via an agent like OpenTelemetry-Col ## OpenTelemetry -Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages. +Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages. Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF. |
azure-monitor | Release And Work Item Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md | To delete, go to in your Application Insights resource under *Configure* select ## See also * [Azure Pipelines documentation](/azure/devops/pipelines)-* [Create work items](./search-and-transaction-diagnostics.md?tabs=transaction-search#create-work-item) +* [Create work items](./transaction-search-and-diagnostics.md?tabs=transaction-search#create-work-item) * [Automation with PowerShell](./powershell.md) * [Availability test](availability-overview.md) |
azure-monitor | Sampling Classic Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling-classic-api.md | Ingestion sampling doesn't work alongside adaptive or fixed-rate sampling. Adapt **Use fixed-rate sampling if:** -* You need synchronized sampling between client and server to navigate between related events. For example, page views and HTTP requests in [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) while investigating events. +* You need synchronized sampling between client and server to navigate between related events. For example, page views and HTTP requests in [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) while investigating events. * You're confident of the appropriate sampling percentage for your app. It should be high enough to get accurate metrics, but below the rate that exceeds your pricing quota and the throttling limits. **Use adaptive sampling:** |
azure-monitor | Transaction Search And Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-search-and-diagnostics.md | + + Title: Transaction Search and Diagnostics +description: This article explains Application Insights end-to-end transaction diagnostics and how to search and filter raw telemetry sent by your web app. + Last updated : 11/16/2023++++# Transaction Search and Diagnostics ++Azure Monitor Application Insights offers Transaction Search for pinpointing specific telemetry items and Transaction Diagnostics for comprehensive end-to-end transaction analysis. ++**Transaction Search**: This experience enables users to locate and examine individual telemetry items such as page views, exceptions, and web requests. Additionally, it offers the capability to view log traces and events coded into the application. It identifies performance issues and errors within the application. ++**Transaction Diagnostics**: Quickly identify issues in components through comprehensive insight into end-to-end transaction details, including dependencies and exceptions. Access this feature via the Search interface by choosing an item from the search results. ++## [Transaction Search](#tab/transaction-search) ++Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you code. ++For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md). ++## Where do you see Search? ++You can find **Search** in the Azure portal or Visual Studio. ++### In the Azure portal ++You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu. +++Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events you coded. The top of the **Results** list has a summary chart showing counts of events over time. ++Back out of the dropdown menu or select **Refresh** to get new events. ++### In Visual Studio ++In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal. ++Open the **Application Insights Search** window in Visual Studio: +++The **Application Insights Search** window has features similar to the web portal: +++The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view. ++## Inspect individual items ++Select any telemetry item to see key fields and related items. +++The end-to-end transaction details view opens. ++## Filter event types ++Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**. ++The event types are: ++* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls. 
+* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts. +* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports. +* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here. +* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`. +* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md). +* **Availability**: Results of [availability tests](availability-overview.md) ++## Filter on property values ++You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/search-and-transaction-diagnostics/filter-icon.png" lightbox="./media/search-and-transaction-diagnostics/filter-icon.png" alt-text="Filter icon"::: to start. ++Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property. ++Notice that the counts to the right of the filter values show how many occurrences there are in the current filtered set. ++## Find events with the same property ++To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab. +++## Search the data ++> [!NOTE] +> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane. +> ++You can search for terms in any of the property values. This capability is useful if you write [custom events](./api-custom-events-metrics.md) with property values. ++You might want to set a time range because searches over a shorter range are faster. +++Search for complete words, not substrings. Use quotation marks to enclose special characters. ++| String | *Not* found | Found | +| | | | +| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`| +|United States|`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"` ++You can use the following search expressions: ++| Sample query | Effect | +| | | +| `apple` |Find all events in the time range whose fields include the word `apple`. | +| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital `AND`, not `and`. <br/>Short form. | +| `apple OR banana` |Find events that contain either word. Use `OR`, not `or`. | +| `apple NOT banana` |Find events that contain one word but not the other. | ++## Sampling ++If your app generates significant telemetry and uses ASP.NET SDK version 2.0.0-beta3 or later, it automatically reduces the volume sent to the portal through adaptive sampling. This module sends only a representative fraction of events. It selects or deselects events related to the same request as a group, allowing you to navigate between related events. ++Learn about [sampling](./sampling.md). ++## Create work item ++You can create a bug in GitHub or Azure DevOps with the details from any telemetry item. ++Go to the end-to-end transaction detail view by selecting any telemetry item. 
Then select **Create work item**. +++The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab. ++## Send more telemetry to Application Insights ++In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can: ++* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events. ++* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions. ++Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md). ++## <a name="questions"></a>Frequently asked questions ++Find answers to common questions. ++### <a name="limits"></a>How much data is retained? ++See the [Limits summary](../service-limits.md#application-insights). ++### How can I see POST data in my server requests? ++We don't log the POST data automatically, but you can use [TrackTrace or log calls](./asp-net-trace-logs.md). Put the POST data in the message parameter. You can't filter on the message in the same way you can filter on properties, but the size limit is longer. ++### Why does my Azure Function search return no results? ++Azure Functions doesn't log URL query strings. ++## [Transaction Diagnostics](#tab/transaction-diagnostics) ++The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure. ++## What is a component? ++Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components. ++* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry). +* Components run on any number of server, role, or container instances. +* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up. ++> [!NOTE] +> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections. ++## Transaction diagnostics experience ++This view has four key parts: ++- a results list +- a cross-component transaction chart +- a time-sequence list of all telemetry related to this operation +- the details pane for any selected telemetry item +++## Cross-component transaction chart ++This chart provides a timeline with horizontal bars during requests and dependencies across components. Any exceptions that are collected are also marked on the timeline. ++- The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. 
The duration is the total time taken for the transaction to complete. +- Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type. +- Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component. +- By default, the request, dependency, or exception that you selected appears to the side. Select any row to see its [details](#details-of-the-selected-telemetry). ++> [!NOTE] +> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them. ++## All telemetry with this Operation ID ++This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the side](#details-of-the-selected-telemetry). +++## Details of the selected telemetry ++This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes. +++## Search results ++This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions. +++## Profiler and Snapshot Debugger ++[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection. ++If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com. ++If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com. +++## Frequently asked questions ++This section provides answers to common questions. ++### Why do I see a single component on the chart and the other components only show as external dependencies without any details? ++Potential reasons: ++* Are the other components instrumented with Application Insights? +* Are they using the latest stable Application Insights SDK? +* If these components are separate Application Insights resources, validate you have [access](resources-roles-access-control.md). +If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner. ++### I see duplicate rows for the dependencies. Is this behavior expected? ++Currently, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. 
The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback! ++### What about clock skews across different component instances? ++Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics. ++### Why is the new experience missing most of the related items queries? ++This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline. ++### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK? ++The transaction diagnostics experience shows all telemetry in a [single operation](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event is generated and a single Operation ID is used for all telemetry generated. As a result, many events might be correlated to the same operation. ++In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID. ++### Why do transaction detail durations not add up to the top-request duration? ++Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call. ++If all calls were instrumented, in process is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md). ++### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal? ++This error indicates that the browser was unable to call into a required API or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then identify if you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers, or other machines, investigate DNS or other network related issues from the client machine where the API calls are failing. 
If the portal error continues and needs more investigation, [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while reproducing the unexpected portal behavior, then open a support case from the Azure portal. ++++## See also ++* [Write complex queries in Analytics](../logs/log-analytics-tutorial.md) +* [Send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md) +* [Availability overview](availability-overview.md) |
azure-monitor | Container Insights Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md | With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to ## Repair duplicate agents -Customers who manually Container Insights using custom methods prior to October 2022 can end up with multiple versions of our agent running together. To clear this duplication, customers are recommended to follow the steps below: +Customers who manually enable Container Insights using custom methods prior to October 2022 can end up with multiple versions of our agent running together. To clear this duplication, we recommend that customers follow the steps below: ### Migration guidelines for AKS clusters Current ama-logs default limits are below Validate whether the current default settings and limits meet the customer's needs. If not, create support tickets under the containerinsights agent to help investigate and adjust memory/CPU limits for the customer. Doing this can help address the scale limitation issues that some customers encountered previously, which resulted in OOMKilled exceptions. -4. Fetch current Azure analytic workspace ID since we're going to re-onboard the container insights. +3. Fetch the current Log Analytics workspace ID since we're going to re-onboard Container insights. ```console az aks show -g $resourceGroupNameofCluster -n $nameofTheCluster | grep logAnalyticsWorkspaceResourceID ``` -6. Clean resources from previous onboarding: +4. Clean resources from previous onboarding: **For customers that previously onboarded to containerinsights through helm chart** : curl -LO raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/kubernetes/o kubectl delete -f omsagent.yaml ``` -7. Disable container insights to clean all related resources with aks command: [Disable Container insights on your Azure Kubernetes Service (AKS) cluster - Azure Monitor | Microsoft Learn](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-optout) +5. Disable Container insights to clean all related resources with the aks command: [Disable Container insights on your Azure Kubernetes Service (AKS) cluster - Azure Monitor | Microsoft Learn](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-optout) ```console az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG ``` -8. Re-onboard to containerinsights with the workspace fetched from step 3 using [the steps outlined here](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#specify-a-log-analytics-workspace) +6. Re-onboard to Container insights with the workspace fetched from step 3 using [the steps outlined here](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#specify-a-log-analytics-workspace) |
azure-monitor | Data Collection Rule Create Edit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-create-edit.md | + + Title: Create and edit data collection rules (DCRs) in Azure Monitor +description: Details on creating and editing data collection rules (DCRs) in Azure Monitor. +++ Last updated : 11/15/2023+++++# Create and edit data collection rules (DCRs) in Azure Monitor +There are multiple methods for creating a [data collection rule (DCR)](./data-collection-rule-overview.md) in Azure Monitor. In some cases, Azure Monitor will create and manage the DCR according to settings that you configure in the Azure portal. In other cases, you might need to create your own DCRs to customize particular scenarios. ++This article describes the different methods for creating and editing a DCR. For the contents of the DCR itself, see [Structure of a data collection rule in Azure Monitor](./data-collection-rule-structure.md). ++## Permissions + You require the following permissions to create DCRs and associations: ++| Built-in role | Scopes | Reason | +|:|:|:| +| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs, assign rules to the machine, and deploy associations. | +| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy agent extensions on the VM. | +| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Deploy Azure Resource Manager templates. | ++## Automated methods to create a DCR +The following table lists methods to create data collection scenarios using the Azure portal where the DCR is created for you. In these cases, you don't need to interact directly with the DCR itself. ++| Scenario | Resources | Description | +|:|:|:| +| Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then associate that rule with one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. | +| | [Enable VM insights overview](../vm/vminsights-enable-overview.md) | When you enable VM insights on a VM, the Azure Monitor agent is installed, and a DCR is created that collects a predefined set of performance counters. You shouldn't modify this DCR. | +| Container insights | [Enable Container insights](../containers/prometheus-metrics-enable.md) | When you enable Container insights on a Kubernetes cluster, a containerized version of the Azure Monitor agent is installed, and a DCR is created that collects data according to the configuration you selected. You may need to modify this DCR to add a transformation.
| +| Text or JSON logs | [Collect logs from a text or JSON file with Azure Monitor Agent](../agents/data-collection-text-log.md?tabs=portal) | Use the Azure portal to create a DCR to collect entries from a text log on a machine with Azure Monitor Agent. | +| Workspace transformation | [Add a transformation in a workspace data collection rule using the Azure portal](../logs/tutorial-workspace-transformations-portal.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. | +++## Manually create a DCR +To manually create a DCR, create a JSON file using the appropriate configuration for the data collection that you're configuring. Start with one of the [sample DCRs](./data-collection-rule-samples.md) and use information in [Structure of a data collection rule in Azure Monitor](./data-collection-rule-structure.md) to modify the JSON file for your particular environment and requirements. ++Once you have the JSON file created, you can use any of the following methods to create the DCR: ++## [CLI](#tab/CLI) +Use the [az monitor data-collection rule create](/cli/azure/monitor/data-collection/rule) command to create a DCR from your JSON file using the Azure CLI as shown in the following example. ++```azurecli +az monitor data-collection rule create --location 'eastus' --resource-group 'my-resource-group' --name 'myDCRName' --rule-file 'C:\MyNewDCR.json' --description 'This is my new DCR' +``` ++## [PowerShell](#tab/powershell) +Use the [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule) cmdlet to create the DCR from your JSON file using PowerShell as shown in the following example. ++```powershell +New-AzDataCollectionRule -Location 'east-us' -ResourceGroupName 'my-resource-group' -RuleName 'myDCRName' -RuleFile 'C:\MyNewDCR.json' -Description 'This is my new DCR' +``` +++## [API](#tab/api) +Use the [DCR create API](/rest/api/monitor/data-collection-rules/create) to create the DCR from your JSON file. You can use any method to call a REST API as shown in the following examples. +++```powershell +$ResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr" +$FilePath = ".\my-dcr.json" +$DCRContent = Get-Content $FilePath -Raw +Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method PUT -Payload $DCRContent +``` +++```azurecli +ResourceId="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr" +FilePath="my-dcr.json" +az rest --method put --url $ResourceId"?api-version=2021-09-01-preview" --body @$FilePath +``` +++## [ARM](#tab/arm) +Using an ARM template, you can define parameters so you can provide particular values at the time you install the DCR. This allows you to use a single template for multiple installations. Use the following template, copying in the JSON for your DCR and adding any other parameters you want to use. ++See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for different methods to deploy ARM templates. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "dataCollectionRuleName": { + "type": "string", + "metadata": { + "description": "Specifies the name of the Data Collection Rule to create." + } + }, + "location": { + "type": "string", + "metadata": { + "description": "Specifies the location in which to create the Data Collection Rule." + } + } + }, + "resources": [ + { + "type": "Microsoft.Insights/dataCollectionRules", + "name": "[parameters('dataCollectionRuleName')]", + "location": "[parameters('location')]", + "apiVersion": "2021-09-01-preview", + "properties": { + "<dcr-properties>" + } + } + ] +} ++``` +++The following tutorials include examples of manually creating DCRs. ++- [Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md) +- [Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md) ++## Edit a DCR +To edit a DCR, you can use any of the methods described in the previous section to create a DCR using a modified version of the JSON. ++If you need to retrieve the JSON for an existing DCR, you can copy it from the **JSON View** for the DCR in the Azure portal. You can also retrieve it using an API call as shown in the following PowerShell example. ++```powershell +$ResourceId = "<ResourceId>" # Resource ID of the DCR to edit +$FilePath = "<FilePath>" # Store DCR content in this file +$DCR = Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method GET +$DCR.Content | ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File -FilePath $FilePath +``` ++For a tutorial that walks through the process of retrieving and then editing an existing DCR, see [Tutorial: Edit a data collection rule (DCR)](./data-collection-rule-edit.md). ++## Next steps ++- [Read about the detailed structure of a data collection rule](data-collection-rule-structure.md) +- [Get details on transformations in a data collection rule](data-collection-transformations.md) |
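The "Edit a DCR" section above retrieves the existing definition with PowerShell; the same round trip can also be sketched with the Azure CLI, using the preview API version shown elsewhere in the article. This is an illustrative sketch that assumes the hypothetical DCR resource ID below; edit the downloaded file before applying it back.

```azurecli
ResourceId="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr"

# Retrieve the current DCR definition and save it to a local file
az rest --method get --url $ResourceId"?api-version=2021-09-01-preview" > my-dcr.json

# After editing my-dcr.json, apply the updated definition back to the same DCR
az rest --method put --url $ResourceId"?api-version=2021-09-01-preview" --body @my-dcr.json
```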
azure-monitor | Data Collection Rule Edit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md | Title: Tutorial - Editing Data Collection Rules description: This article describes how to make changes in Data Collection Rule definition using command line tools and simple API calls. - Previously updated : 07/17/2023 Last updated : 11/03/2023 -# Tutorial: Editing Data Collection Rules -This tutorial describes how to edit the definition of Data Collection Rule (DCR) that has been already provisioned using command line tools. +# Tutorial: Edit a data collection rule (DCR) +This tutorial describes how to edit the definition of Data Collection Rule (DCR) that has been already provisioned using command line tools. In this tutorial, you learn how to: > [!div class="checklist"] In this tutorial, you learn how to: > * Apply changes to a Data Collection Rule using ARM API call > * Automate the process of DCR update using PowerShell scripts +> [!NOTE] +> This tutorial walks through one method for editing an existing DCR. See [Create and edit data collection rules (DCRs) in Azure Monitor](data-collection-rule-create-edit.md) for other methods. + ## Prerequisites To complete this tutorial you need the following: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create Data Collection Rule objects](data-collection-rule-create-edit.md#permissions) in the workspace. - Up to date version of PowerShell. Using Azure Cloud Shell is recommended. ## Overview of tutorial |
azure-monitor | Data Collection Rule Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md | description: Overview of data collection rules (DCRs) in Azure Monitor including Previously updated : 08/08/2023 Last updated : 11/15/2023 # Data collection rules in Azure Monitor-Data collection rules (DCRs) define the [data collection process in Azure Monitor](../essentials/data-collection.md). DCRs specify what data should be collected, how to transform that data, and where to send that data. Some DCRs will be created and managed by Azure Monitor to collect a specific set of data to enable insights and visualizations. You might also create your own DCRs to define the set of data required for other scenarios. +Data collection rules (DCRs) are sets of instructions supporting [data collection in Azure Monitor](../essentials/data-collection.md). They provide a consistent and centralized way to define and customize different data collection scenarios. Depending on the scenario, DCRs specify such details as what data should be collected, how to transform that data, and where to send it. ++DCRs are stored in Azure so that you can centrally manage them. Different components of a data collection workflow will access the DCR for particular information that it requires. In some cases, you can use the Azure portal to configure data collection, and Azure Monitor will create and manage the DCR for you. Other scenarios will require you to create your own DCR. You may also choose to customize an existing DCR to meet your required functionality. +++## Basic operation +One example of how DCRs are used is the Logs Ingestion API that allows you to send custom data to Azure Monitor. This scenario is illustrated in the following diagram. Prior to using the API, you create a DCR that defines the structure of the data that you're going to send and the Log Analytics workspace and table that will receive the data. If the data needs to be formatted before it's stored, you can include a [transformation](data-collection-transformations.md) in the DCR. ++Each call to the API specifies the DCR to use, and Azure Monitor references this DCR to determine what to do with the incoming data. If your requirements change, you can modify the DCR without making any changes to the application sending the data. +++## Data collection rule associations (DCRAs) +Data collection rule associations (DCRAs) associate a DCR with an object being monitored, for example a virtual machine with the Azure Monitor agent (AMA). A single object can be associated with multiple DCRs, and a single DCR can be associated with multiple objects. ++The following diagram illustrates data collection for the Azure Monitor agent. When the agent is installed, it connects to Azure Monitor to retrieve any DCRs that are associated with it. It then references the data sources section of each DCR to determine what data to collect from the machine. When the agent delivers this data, Azure Monitor references other sections of the DCR to determine whether a transformation should be applied to it and then the workspace and table to send it to. +++ ## View data collection rules+There are multiple ways to view the DCRs in your subscription. ++### [Portal](#tab/portal) To view your DCRs in the Azure portal, select **Data Collection Rules** under **Settings** on the **Monitor** menu. ++Select a DCR to view its details. 
For DCRs supporting VMs, you can view and modify its associations and the data that it collects. For other DCRs, use the **JSON view** to view the details of the DCR. See [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md) for details on how you can modify them. + > [!NOTE]-> Although this view shows all DCRs in the specified subscriptions, selecting the **Create** button will create a data collection for Azure Monitor Agent. Similarly, this page will only allow you to modify DCRs for Azure Monitor Agent. For guidance on how to create and update DCRs for other workflows, see [Create a data collection rule](#create-a-data-collection-rule). +> Although this view shows all DCRs in the specified subscriptions, selecting the **Create** button will create a data collection for Azure Monitor Agent. Similarly, this page will only allow you to modify DCRs for Azure Monitor Agent. For guidance on how to create and update DCRs for other workflows, see [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md). +### [PowerShell](#tab/powershell) +Use [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule) to retrieve the DCRs in your subscription. -## Create a data collection rule -The following resources describe different scenarios for creating DCRs. In some cases, the DCR might be created for you. In other cases, you might need to create and edit it yourself. -| Scenario | Resources | Description | -|:|:|:| -| Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then apply that rule to one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. | -| | [Use Azure Policy to install Azure Monitor Agent and associate with a DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install Azure Monitor Agent and associate one or more DCRs with any virtual machines or virtual machine scale sets as they're created in your subscription. -| Text logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md)<br>[Configure text logs by using Azure Monitoring Agent](../agents/data-collection-text-log.md) | Send custom data by using a REST API or Agent. The API call connects to a data collection endpoint and specifies a DCR to use. The agent uses the DCR to configure the collection of data on a machine. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. | -| Azure Event Hubs | [Ingest events from Azure Event Hubs to Azure Monitor Logs](../logs/ingest-logs-event-hub.md)| Collect data from multiple sources to an event hub and ingest the data you need directly into tables in one or more Log Analytics workspaces. 
This is a highly scalable method of collecting data from a wide range of sources with minimum configuration.| -| Workspace transformation | [Configure ingestion-time transformations by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations by using Azure Resource Manager templates and the REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. | +```powershell +Get-AzDataCollectionRule +``` -## Work with data collection rules -To work with DCRs outside of the Azure portal, see the following resources: +Use [Get-azDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation) to retrieve the DCRs associated with a VM. -| Method | Resources | -|:|:| -| API | Directly edit the DCR in any JSON editor and then [install it by using the REST API](/rest/api/monitor/datacollectionrules). | -| CLI | Create DCRs and associations with the [Azure CLI](https://github.com/Azure/azure-cli-extensions/blob/master/src/monitor-control-service/README.md). | -| PowerShell | Work with DCRs and associations with the following Azure PowerShell cmdlets:<br>[Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule)<br>[New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule)<br>[Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule)<br>[Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule)<br>[Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule)<br>[Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation)<br>[New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation)<br>[Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation) +```powershell +get-azDataCollectionRuleAssociation -TargetResourceId /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm | foreach {Get-azDataCollectionRule -RuleId $_.DataCollectionRuleId } +``` -## Structure of a data collection rule -Data collection rules are formatted in JSON. Although you might not need to interact with them directly, there are scenarios where you might need to directly edit a DCR. For a description of this structure and the different elements used for different workflows, see [Data collection rule structure](data-collection-rule-structure.md). +### [CLI](#tab/cli) +Use [az monitor data-collection rule](/cli/azure/monitor/data-collection/rule) to work the DCRs using Azure CLI. -## Permissions -When you use programmatic methods to create DCRs and associations, you require the following permissions: +Use the following to return all DCRs in your subscription. -| Built-in role | Scopes | Reason | -|:|:|:| -| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs, assign rules to the machine, deploy associations). 
| -| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy agent extensions on the VM. | -| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Deploy Azure Resource Manager templates. | +```azurecli +az monitor data-collection rule list +``` -## Limits -For limits that apply to each DCR, see [Azure Monitor service limits](../service-limits.md#data-collection-rules). +Use the following to return DCR associations for a VM. ++```azurecli +az monitor data-collection rule association list --resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm " +``` + ## Supported regions Data collection rules are available in all public regions where Log Analytics workspaces and the Azure Government and China clouds are supported. Air-gapped clouds aren't yet supported. Data collection rules are available in all public regions where Log Analytics wo **Single region data residency** is a preview feature to enable storing customer data in a single region and is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and the Brazil South (Sao Paulo State) Region of the Brazil Geo. Single-region residency is enabled by default in these regions. ## Data resiliency and high availability-A rule gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability. +A DCR gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability. ## Next steps+See the following articles for additional information on how to work with DCRs. -- [Read about the detailed structure of a data collection rule](data-collection-rule-structure.md)-- [Get details on transformations in a data collection rule](data-collection-transformations.md)+- [Data collection rule structure](data-collection-rule-structure.md) for a description of the JSON structure of DCRs and the different elements used for different workflows. +- [Sample data collection rules (DCRs)](data-collection-rule-samples.md) for sample DCRs for different data collection scenarios. +- [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md) for different methods to create DCRs for different data collection scenarios. +- [Azure Monitor service limits](../service-limits.md#data-collection-rules) for limits that apply to each DCR. |
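A DCR association (DCRA) can also be created from the command line. The following sketch assumes the `association create` command from the monitor-control-service CLI extension (the article shows the corresponding `association list` command); parameter names may differ slightly between extension versions, and the resource IDs are placeholders.

```azurecli
az monitor data-collection rule association create \
    --name "myVmAssociation" \
    --rule-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr" \
    --resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```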
azure-monitor | Data Collection Rule Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-samples.md | + + Title: Sample data collection rules (DCRs) in Azure Monitor +description: Sample data collection rule for different Azure Monitor data collection scenarios. + Last updated : 11/15/2023++++++# Sample data collection rules (DCRs) in Azure Monitor +This article includes sample [data collection rules (DCRs)](./data-collection-rule-overview.md) for different scenarios. For descriptions of each of the properties in these DCRs, see [Data collection rule structure](./data-collection-rule-structure.md). ++## Azure Monitor agent - events and performance data +The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is for virtual machines with [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) and has the following details: ++- Performance data + - Collects specific Processor, Memory, Logical Disk, and Physical Disk counters every 15 seconds and uploads every minute. + - Collects specific Process counters every 30 seconds and uploads every 5 minutes. +- Windows events + - Collects Windows security events and uploads every minute. + - Collects Windows application and system events and uploads every 5 minutes. +- Syslog + - Collects Debug, Critical, and Emergency events from cron facility. + - Collects Alert, Critical, and Emergency events from syslog facility. +- Destinations + - Sends all data to a Log Analytics workspace named centralWorkspace. ++> [!NOTE] +> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries). +++```json +{ + "location": "eastus", + "properties": { + "dataSources": { + "performanceCounters": [ + { + "name": "cloudTeamCoreCounters", + "streams": [ + "Microsoft-Perf" + ], + "scheduledTransferPeriod": "PT1M", + "samplingFrequencyInSeconds": 15, + "counterSpecifiers": [ + "\\Processor(_Total)\\% Processor Time", + "\\Memory\\Committed Bytes", + "\\LogicalDisk(_Total)\\Free Megabytes", + "\\PhysicalDisk(_Total)\\Avg. 
Disk Queue Length" + ] + }, + { + "name": "appTeamExtraCounters", + "streams": [ + "Microsoft-Perf" + ], + "scheduledTransferPeriod": "PT5M", + "samplingFrequencyInSeconds": 30, + "counterSpecifiers": [ + "\\Process(_Total)\\Thread Count" + ] + } + ], + "windowsEventLogs": [ + { + "name": "cloudSecurityTeamEvents", + "streams": [ + "Microsoft-Event" + ], + "scheduledTransferPeriod": "PT1M", + "xPathQueries": [ + "Security!*" + ] + }, + { + "name": "appTeam1AppEvents", + "streams": [ + "Microsoft-Event" + ], + "scheduledTransferPeriod": "PT5M", + "xPathQueries": [ + "System!*[System[(Level = 1 or Level = 2 or Level = 3)]]", + "Application!*[System[(Level = 1 or Level = 2 or Level = 3)]]" + ] + } + ], + "syslog": [ + { + "name": "cronSyslog", + "streams": [ + "Microsoft-Syslog" + ], + "facilityNames": [ + "cron" + ], + "logLevels": [ + "Debug", + "Critical", + "Emergency" + ] + }, + { + "name": "syslogBase", + "streams": [ + "Microsoft-Syslog" + ], + "facilityNames": [ + "syslog" + ], + "logLevels": [ + "Alert", + "Critical", + "Emergency" + ] + } + ] + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "centralWorkspace" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Microsoft-Perf", + "Microsoft-Syslog", + "Microsoft-Event" + ], + "destinations": [ + "centralWorkspace" + ] + } + ] + } + } +``` ++## Azure Monitor agent - text logs +The sample data collection rule below is used to collect [text logs using Azure Monitor agent](../agents/data-collection-text-log.md). ++```json +{ + "location": "eastus", + "properties": { + "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint", + "streamDeclarations": { + "Custom-MyLogFileFormat": { + "columns": [ + { + "name": "TimeGenerated", + "type": "datetime" + }, + { + "name": "RawData", + "type": "string" + } + ] + } + }, + "dataSources": { + "logFiles": [ + { + "streams": [ + "Custom-MyLogFileFormat" + ], + "filePatterns": [ + "C:\\JavaLogs\\*.log" + ], + "format": "text", + "settings": { + "text": { + "recordStartTimestampFormat": "ISO 8601" + } + }, + "name": "myLogFileFormat-Windows" + }, + { + "streams": [ + "Custom-MyLogFileFormat" + ], + "filePatterns": [ + "//var//*.log" + ], + "format": "text", + "settings": { + "text": { + "recordStartTimestampFormat": "ISO 8601" + } + }, + "name": "myLogFileFormat-Linux" + } + ] + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "MyDestination" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Custom-MyLogFileFormat" + ], + "destinations": [ + "MyDestination" + ], + "transformKql": "source", + "outputStream": "Custom-MyTable_CL" + } + ] + } +} +``` ++## Event Hubs +The sample data collection rule below is used to collect [data from an event hub](../logs/ingest-logs-event-hub.md). 
++```json +{ + "location": "eastus", + "properties": { + "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint", + "streamDeclarations": { + "Custom-MyEventHubStream": { + "columns": [ + { + "name": "TimeGenerated", + "type": "datetime" + }, + { + "name": "RawData", + "type": "string" + }, + { + "name": "Properties", + "type": "dynamic" + } + ] + } + }, + "dataSources": { + "dataImports": { + "eventHub": { + "consumerGroup": "<consumer-group>", + "stream": "Custom-MyEventHubStream", + "name": "myEventHubDataSource1" + } + } + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "MyDestination" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Custom-MyEventHubStream" + ], + "destinations": [ + "MyDestination" + ], + "transformKql": "source", + "outputStream": "Custom-MyTable_CL" + } + ] + } +} +``` ++## Logs ingestion API +The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is used with the [Logs ingestion API](../logs/logs-ingestion-api-overview.md). It has the following details: ++- Sends data to a table called MyTable_CL in a workspace called my-workspace. +- Applies a [transformation](../essentials//data-collection-transformations.md) to the incoming data. +++```json +{ + "location": "eastus", + "properties": { + "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint", + "streamDeclarations": { + "Custom-MyTable": { + "columns": [ + { + "name": "Time", + "type": "datetime" + }, + { + "name": "Computer", + "type": "string" + }, + { + "name": "AdditionalContext", + "type": "string" + } + ] + } + }, + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/cefingestion/providers/microsoft.operationalinsights/workspaces/my-workspace", + "name": "LogAnalyticsDest" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Custom-MyTable" + ], + "destinations": [ + "LogAnalyticsDest" + ], + "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, ExtendedColumn=tostring(jsonContext.CounterName)", + "outputStream": "Custom-MyTable_CL" + } + ] + } +} +``` ++## Workspace transformation DCR +The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is used as a +[workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr) to transform all data sent to a table called *LAQueryLogs*. 
++```json +{ + "location": "eastus", + "properties": { + "destinations": { + "logAnalytics": [ + { + "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace", + "name": "clv2ws1" + } + ] + }, + "dataFlows": [ + { + "streams": [ + "Microsoft-Table-LAQueryLogs" + ], + "destinations": [ + "clv2ws1" + ], + "transformKql": "source |where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) |extend RequestContext = ''" + } + ] + } +} +``` +++## Next steps ++- [Get details for the different properties in a DCR](../essentials/data-collection-rule-structure.md) +- [See different methods for creating a DCR](../essentials/data-collection-rule-create-edit.md) + |
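Any of these samples can be deployed by saving the JSON to a file and creating the DCR with the CLI command described in the create-and-edit article. A minimal sketch, assuming the Azure Monitor agent sample above was saved as agent-dcr.json:

```azurecli
az monitor data-collection rule create \
    --location "eastus" \
    --resource-group "my-resource-group" \
    --name "myAgentDCR" \
    --rule-file "agent-dcr.json" \
    --description "DCR created from the Azure Monitor agent sample"
```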
azure-monitor | Data Collection Rule Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md | description: Details on the structure of different kinds of data collection rule Previously updated : 08/08/2023 Last updated : 11/15/2023 ms.reviwer: nikeist # Structure of a data collection rule in Azure Monitor+[Data collection rules (DCRs)](data-collection-rule-overview.md) are sets of instructions that determine how to collect and process telemetry sent to Azure Monitor. Some DCRs will be created and managed by Azure Monitor. This article describes the JSON properties of DCRs for creating and editing them in those cases where you need to work with them directly. -[Data collection rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some DCRs will be created and managed by Azure Monitor. You might create other DCRs to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing DCRs in those cases where you need to work with them directly. +- See [Create and edit data collection rules (DCRs) in Azure Monitor](data-collection-rule-create-edit.md) for details working with the JSON described here. +- See [Sample data collection rules (DCRs) in Azure Monitor](../essentials/data-collection-rule-samples.md) for sample DCRs for different scenarios. -## Custom logs -A DCR for [API based custom logs](../logs/logs-ingestion-api-overview.md) contains the following sections. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md). -### streamDeclarations -This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose: +## `dataCollectionEndpointId` +Specifies the [data collection endpoint (DCE)](data-collection-endpoint-overview.md) used by the DCR. -- Key represents the stream name, which must begin with *Custom-*.-- Value is the full list of top-level properties that are contained in the JSON data that will be sent.+**Scenarios** +- Azure Monitor agent +- Logs ingestion API +- Events Hubs + -The shape of the data you send to the endpoint doesn't need to match that of the destination table. Instead, the output of the transform that's applied on top of the input data needs to match the destination shape. The possible data types that can be assigned to the properties are `string`, `int`, `long`, `real`, `boolean`, `dynamic`, and `datetime`. +## `streamDeclarations` +Declaration of the different types of data sent into the Log Analytics workspace. Each stream is an object whose key represents the stream name, which must begin with *Custom-*. The stream contains a full list of top-level properties that are contained in the JSON data that will be sent. The shape of the data you send to the endpoint doesn't need to match that of the destination table. Instead, the output of the transform that's applied on top of the input data needs to match the destination shape. -### destinations -This section contains a declaration of all the destinations where the data will be sent. Only Log Analytics is currently supported as a destination. Each Log Analytics destination requires the full workspace resource ID and a friendly name that will be used elsewhere in the DCR to refer to this workspace. 
+This section isn't used for data sources sending known data types such as events and performance data sent from Azure Monitor agent. -### dataFlows -This section ties the other sections together. It defines the following properties for each stream declared in the `streamDeclarations` section: +The possible data types that can be assigned to the properties are: -- `destination` from the `destinations` section where the data will be sent.-- `transformKql` section, which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.-- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of `outputStream` has the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.+- `string` +- `int` +- `long` +- `real` +- `boolean` +- `dynamic` +- `datetime`. -> [!Note] -> -> You can only send logs from one specific data source to one workspace. To send data from a single data source to multiple workspaces, please create one DCR per workspace. +**Scenarios** +- Azure Monitor agent (text logs only) +- Logs ingestion API +- Event Hubs -## Azure Monitor Agent - A DCR for [Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). For agent based custom logs, see [Sample Custom Log Rules - Agent](../agents/data-collection-text-log.md) +## `destinations` +Declaration of all the destinations where the data will be sent. Only `logAnalytics` is currently supported as a destination except for Azure Monitor agent which can also use `azureMonitorMetrics`. Each Log Analytics destination requires the full workspace resource ID and a friendly name that will be used elsewhere in the DCR to refer to this workspace. -### dataSources -This unique source of monitoring data has its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and Syslog. Each data source matches a particular data source type as described in the following table. +**Scenarios** +- Azure Monitor agent (text logs only) +- Logs ingestion API +- Event Hubs +- Workspace transformation DCR -Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available appear in the following table. +## `dataSources` +Unique source of monitoring data that has its own format and method of exposing its data. Each data source has a data source type, and each type defines a unique set of properties that must be specified for each data source. The data source types currently available are listed in the following table. 
| Data source type | Description | |:|:|+| eventHub | Data from Azure Event Hubs | | extension | VM extension-based data source, used exclusively by Log Analytics solutions and Azure services ([View agent supported services and solutions](../agents/azure-monitor-agent-overview.md#supported-services-and-features)) |-| performanceCounters | Performance counters for both Windows and Linux | -| syslog | Syslog events on Linux | -| windowsEventLogs | Windows event log | +| logFiles | Text log on a virtual machine | +| performanceCounters | Performance counters for both Windows and Linux virtual machines | +| syslog | Syslog events on Linux virtual machines | +| windowsEventLogs | Windows event log on virtual machines | -### Streams - This unique handle describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream can be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams, for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace. +**Scenarios** +- Azure Monitor agent +- Event Hubs +++## `dataFlows` +Matches streams with destinations and optionally specifies a transformation. ++### `dataFlows/Streams` +One or more streams defined in the previous section. You may include multiple streams in a single data flow if you want to send multiple data sources to the same destination. Only use a single stream though if the data flow includes a transformation. One stream can also be used by multiple data flows when you want to send a particular data source to multiple tables in the same Log Analytics workspace. ++### `dataFlows/destinations` +One or more destinations from the `destinations` section above. Multiple destinations are allowed for multi-homing scenarios. ++### `dataFlows/transformKql` +Optional [transformation](data-collection-transformations.md) applied to the incoming stream. The transformation must understand the schema of the incoming data and output data in the schema of the target table. If you use a transformation, the data flow should only use a single stream. ++### `dataFlows/outputStream` +Describes which table in the workspace specified under the `destination` property the data will be sent to. The value of `outputStream` has the format `Microsoft-[tableName]` when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom table. Only one destination is allowed per stream.<br><br>This property isn't used for known data sources from Azure Monitor such as events and performance data since these are sent to predefined tables. | ++**Scenarios** ++- Azure Monitor agent +- Logs ingestion API +- Event Hubs +- Workspace transformation DCR -### destinations -This set of destinations indicates where the data should be sent. Examples include Log Analytics workspace and Azure Monitor Metrics. Multiple destinations are allowed for multi-homing scenarios. -### dataFlows -The definition indicates which streams should be sent to which destinations. ## Next steps |
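To see how these properties fit together on a DCR that's already deployed, you can retrieve its full definition and compare it against the sections described here. A minimal sketch, assuming a hypothetical DCR named my-dcr:

```azurecli
# Returns the DCR definition, including dataSources, destinations, and dataFlows
az monitor data-collection rule show --resource-group "my-resource-group" --name "my-dcr"
```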
azure-monitor | Aiops Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md | Last updated 02/28/2023 # Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor Artificial Intelligence for IT Operations (AIOps) offers powerful ways to improve service quality and reliability by using machine learning to process and automatically act on data you collect from applications, services, and IT resources into Azure Monitor. -Azure Monitor's built-in AIOps capabilities provide insights and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources. These features boost your IT monitoring and operations, without requiring machine learning knowledge and further investment. +Azure Monitor's built-in AIOps capabilities provide insights and help you troubleshoot issues and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources. These features boost your IT monitoring and operations, without requiring machine learning knowledge and further investment. Azure Monitor also provides tools that let you create your own machine learning pipeline to introduce new analysis and response capabilities and act on data in Azure Monitor Logs. This article describes Azure Monitor's built-in AIOps capabilities and explains |Monitoring scenario|Capability|Description| |-|-|-|-|Log monitoring|[Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md) | A curated monitoring experience that provides a unified view of your Log Analytics workspaces and uses machine learning to detect ingestion anomalies. | -||[Kusto Query Language (KQL) time series analysis and machine learning functions](../logs/kql-machine-learning-azure-monitor.md)| Easy-to-use tools for generating time series data, detecting anomalies, forecasting, and performing root cause analysis directly in Azure Monitor Logs without requiring in-depth knowledge of data science and programming languages. +|Log monitoring|[Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md) | Provides a unified view of your Log Analytics workspaces and uses machine learning to detect ingestion anomalies. | +||[Kusto Query Language (KQL) time series analysis and machine learning functions](../logs/kql-machine-learning-azure-monitor.md)| Easy-to-use tools for generating time series data, detecting anomalies, forecasting, and performing root cause analysis directly in Azure Monitor Logs without requiring in-depth knowledge of data science and programming languages. | +||[Microsoft Copilot for Azure](/azure/copilot/get-monitoring-information)| Helps you use Log Analytics to analyze data and troubleshoot issues. Generates example KQL queries based on prompts, such as "Are there any errors in container logs?". 
| |Application performance monitoring|[Application Map Intelligent view](../app/app-map.md)| Maps dependencies between services and helps you spot performance bottlenecks or failure hotspots across all components of your distributed application.| ||[Smart detection](../alerts/proactive-diagnostics.md)|Analyzes the telemetry your application sends to Application Insights, alerts on performance problems and failure anomalies, and identifies potential root causes of application performance issues.| |Metric alerts|[Dynamic thresholds for metric alerting](../alerts/alerts-dynamic-thresholds.md)| Learns metrics patterns, automatically sets alert thresholds based on historical data, and identifies anomalies that might indicate service issues.| |
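The KQL time series and machine learning functions called out above can be exercised directly against a workspace. The following is an illustrative sketch, assuming the log-analytics CLI extension is installed and $workspaceId holds the workspace GUID; it flags anomalous ingestion volume in the Usage table with series_decompose_anomalies.

```azurecli
az monitor log-analytics query -w $workspaceId --analytics-query "Usage | make-series IngestedMB = sum(Quantity) default=0 on TimeGenerated from ago(14d) to now() step 1h | extend (Anomalies, Score, Baseline) = series_decompose_anomalies(IngestedMB)"
```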
azure-monitor | Create Custom Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md | To create a custom table, you need: - A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md).-- A JSON file with the schema of your custom table in the following format:+- A JSON file with at least one record of sample for your custom table. This will look similar to the following: + ```json [+ { + "TimeGenerated": "supported_datetime_format", + "<column_name_1>": "<column_name_1_value>", + "<column_name_2>": "<column_name_2_value>" + }, + { + "TimeGenerated": "supported_datetime_format", + "<column_name_1>": "<column_name_1_value>", + "<column_name_2>": "<column_name_2_value>" + }, { "TimeGenerated": "supported_datetime_format", "<column_name_1>": "<column_name_1_value>", To create a custom table, you need: ] ``` - For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats). + All tables in a Log Analytics workspace must have a column named `TimeGenerated`. If your sample data has a column named `TimeGenerated`, then this value will be used to identify the ingestion time of the record. If not, a `TimeGenerated` column will be added to the transformation in your DCR for the table. For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats). ## Create a custom table To create a custom table in the Azure portal: :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot showing custom log table name."::: -1. Select **Browse for files** and locate the JSON file in which you defined the schema of your new table. +1. Select **Browse for files** and locate the JSON file with the sample data for your new table. :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" alt-text="Screenshot showing custom log browse for files."::: - All log tables in Azure Monitor Logs must have a `TimeGenerated` column populated with the timestamp of the logged event. + If your sample data doesn't include a `TimeGenerated` column, then you will receive a message that a transformation is being created with this column. 1. If you want to [transform log data before ingestion](../essentials//data-collection-transformations.md) into your table: You can delete any table in your Log Analytics workspace that's not an [Azure ta > [!NOTE] > - Deleting a restored table doesn't delete the data in the source table.-> - Azure tables that are part of a solution can be removed from workspace when [deleting the solution](https://learn.microsoft.com/cli/azure/monitor/log-analytics/solution?view=azure-cli-latest#az-monitor-log-analytics-solution-delete). The data remains in workspace for the duration of the retention policy defined for the tables. If the [solution is re-created](https://learn.microsoft.com/cli/azure/monitor/log-analytics/solution?view=azure-cli-latest#az-monitor-log-analytics-solution-create) in the workspace, these tables become visible again. 
+> - Azure tables that are part of a solution can be removed from the workspace when [deleting the solution](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete). The data remains in the workspace for the duration of the retention policy defined for the tables. If the [solution is re-created](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-create) in the workspace, these tables become visible again. # [Portal](#tab/azure-portal-2) |
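As an alternative to the portal flow described above, a custom table can be created from the command line. A minimal sketch, assuming the `az monitor log-analytics workspace table create` command and hypothetical column names; the table name must end with `_CL`, and a `TimeGenerated` column of type datetime is required. Unlike the portal flow, this doesn't create a DCR or transformation for you.

```azurecli
az monitor log-analytics workspace table create \
    --resource-group "my-resource-group" \
    --workspace-name "my-workspace" \
    --name "MyCustomLogs_CL" \
    --columns TimeGenerated=datetime Column01=string Column02=string
```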
azure-monitor | Custom Logs Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md | The Log Ingestion API provides the following advantages over the Data Collector The migration procedure described in this article assumes you have: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create data collection rules](../essentials/data-collection-rule-overview.md#permissions) in the Log Analytics workspace.+- [Permissions to create data collection rules](../essentials/data-collection-rule-create-edit.md#permissions) in the Log Analytics workspace. - [A Microsoft Entra application to authenticate API calls](../logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application) or any other Resource Manager authentication scheme. ## Create new resources required for the Log ingestion API |
azure-monitor | Logs Ingestion Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md | Title: Logs Ingestion API in Azure Monitor description: Send data to a Log Analytics workspace using REST API or client libraries. Previously updated : 09/14/2023 Last updated : 11/15/2023 # Logs Ingestion API in Azure Monitor -The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). By using this API, you can send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data. -+The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). The API allows you to send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can also [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data. ## Basic operation+Data can be sent to the Logs Ingestion API from any application that can make a REST API call. This may be a custom application that you create, or it may be an application or agent that understands how to send data to the API. +The application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your Azure subscription. It specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that includes the target table and workspace and the credentials of an app registration with access to the specified DCR. -Your application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call: --- Specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that understands the format of the source data.-- Potentially filters and transforms the data for the target table.-- Directs the data to a specific table in a specific workspace.+The data sent by your application to the API must be formatted in JSON and match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure. You can modify the target table and workspace by modifying the DCR without any change to the API call or source data. -You can modify the target table and workspace by modifying the DCR without any change to the API call or source data. --> [!NOTE] -> To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md). --## Components --The Log ingestion API requires the following components to be created before you can send data. Each of these components must all be located in the same region. 
--| Component | Description | -|:|:| -| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs. | -| Data collection rule (DCR) | [Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The API call must specify a DCR to use. The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions. -| Log Analytics workspace | The Log Analytics workspace contains the tables that will receive the data. The target tables are specific in the DCR. See [Support tables](#supported-tables) for the tables that the ingestion API can send to. | ## Supported tables-The following tables can receive data from the ingestion API. ++Data sent to the ingestion API can be sent to the following tables: | Tables | Description | |:|:|-| Custom tables | The Logs Ingestion API can send data to any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. | -| Azure tables | The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) +| Custom tables | Any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. | +| Azure tables | The following Azure tables are currently supported. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) > [!NOTE] > Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). `_ResourceId`, `id`, `_ResourceId`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names. Custom columns you add to an Azure table must have the suffix `_CF`. -## Authentication +## Configuration +The following table describes each component in Azure that you must configure before you can use the Logs Ingestion API. -Authentication for the Logs Ingestion API is performed at the DCE, which uses standard Azure Resource Manager authentication. A common strategy is to use an application ID and application key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs](tutorial-logs-ingestion-portal.md). +> [!NOTE] +> For a PowerShell script that automates the configuration of these components, see [Sample code to send data to Azure Monitor using Logs ingestion API](../logs/tutorial-logs-ingestion-code.md). 
-### Token audience +| Component | Function | +|:|:| +| App registration and secret | The application registration is used to authenticate the API call. It must be granted permission to the DCR described below. The API call includes the **Application (client) ID** and **Directory (tenant) ID** of the application and the **Value** of an application secret.<br><br>See [Create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and [Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret). | +| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs, so you can use an existing DCE if you already have one in the same region as your Log Analytics workspace.<br><br>See [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint). | +| Table in Log Analytics workspace | The table in the Log Analytics workspace must exist before you can send data to it. You can use one of the [supported Azure tables](#supported-tables) or create a custom table using any of the available methods. If you use the Azure portal to create the table, then the DCR is created for you, including a transformation if it's required. With any other method, you need to create the DCR manually as described in the next section.<br><br>See [Create a custom table](create-custom-table.md#create-a-custom-table). | +| Data collection rule (DCR) | Azure Monitor uses the [Data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to understand the structure of the incoming data and what to do with it. If the structure of the table and the incoming data don't match, the DCR can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions.<br><br>If you create a custom table using the Azure portal, the DCR and the transformation are created for you based on sample data that you provide. If you use an existing table or create a custom table using another method, then you must manually create the DCR using details in the following section.<br><br>Once your DCR is created, you must grant access to it for the application that you created in the first step. From the **Monitor** menu in the Azure portal, select **Data Collection rules** and then the DCR that you created. Select **Access Control (IAM)** for the DCR and then select **Add role assignment** to add the **Monitoring Metrics Publisher** role. | -When developing a custom client to obtain an access token from Microsoft Entra ID for the purpose of submitting telemetry to Log Ingestion API in Azure Monitor, refer to the table provided below to determine the appropriate audience string for your particular host environment. -| Azure cloud version | Token audience value | -| | | -| Azure public cloud | `https://monitor.azure.com` | -| Microsoft Azure operated by 21Vianet cloud | `https://monitor.azure.cn` | -| Azure US Government cloud | `https://monitor.azure.us` | +## **Manually create DCR** +If you're sending data to a table that already exists, then you must create the DCR manually. 
Start with the [Sample DCR for Logs Ingestion API](../essentials/data-collection-rule-samples.md#logs-ingestion-api) and modify the following parameters in the template. Then use any of the methods described in [Create and edit data collection rules (DCRs) in Azure Monitor](../essentials/data-collection-rule-create-edit.md) to create the DCR. -## Source data +| Parameter | Description | +|:|:| +| `region` | Region to create your DCR. This must match the region of the DCE and the Log Analytics workspace. | +| `dataCollectionEndpointId` | Resource ID of your DCE. | +| `streamDeclarations` | Change the column list to the columns in your incoming data. You don't need to change the name of the stream since this just needs to match the `streams` name in `dataFlows`. | +| `workspaceResourceId` | Resource ID of your Log Analytics workspace. You don't need to change the name since this just needs to match the `destinations` name in `dataFlows`. | +| `transformKql` | KQL query to be applied to the incoming data. If the schema of the incoming data matches the schema of the table, then you can use `source` for the transformation which will pass on the incoming data unchanged. Otherwise, use a query that will transform the data to match the table schema. | +| `outputStream` | Name of the table to send the data. For a custom table, add the prefix *Custom-\<table-name\>*. For a built-in table, add the prefix *Microsoft-\<table-name\>*. | -The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure. -## Client libraries -You can use the following client libraries to send data to the Logs ingestion API: +++## Client libraries +In addition to making a REST API call, you can use the following client libraries to send data to the Logs ingestion API. The libraries require the same components described in [Configuration](#configuration). For examples using each of these libraries, see [Sample code to send data to Azure Monitor using Logs ingestion API](../logs/tutorial-logs-ingestion-code.md). - [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme) - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azingest) You can use the following client libraries to send data to the Logs ingestion AP - [Python](/python/api/overview/azure/monitor-ingestion-readme) ## REST API call-To send data to Azure Monitor with a REST API call, make a POST call to the DCE over HTTP. Details of the call are described in the following sections. +To send data to Azure Monitor with a REST API call, make a POST call over HTTP. Details required for this call are described in this section. ### Endpoint URI-The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data. ++The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. The immutable ID is generated for the DCR when it's created. You can retrieve it from the [JSON view of the DCR in the Azure portal](../essentials/data-collection-rule-overview.md?tabs=portal#view-data-collection-rules). 
`Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#streamdeclarations) in the DCR that should handle the custom data. ``` {Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/{Stream Name}?api-version=2021-11-01-preview ``` -> [!NOTE] -> You can retrieve the immutable ID from the JSON view of the DCR. For more information, see [Collect information from the DCR](tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr). +For example: ++``` +https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com/dataCollectionRules/dcr-000a00a000a00000a000000aa000a0aa/streams/Custom-MyTable?api-version=2021-11-01-preview +``` ### Headers -| Header | Required? | Value | Description | -|:|:|:|:| -| Authorization | Yes | Bearer (bearer token obtained through the client credentials flow) | | -| Content-Type | Yes | `application/json` | | -| Content-Encoding | No | `gzip` | Use the gzip compression scheme for performance optimization. | -| x-ms-client-request-id | No | String-formatted GUID | Request ID that can be used by Microsoft for any troubleshooting purposes. | +The following table describes the headers for your API call. +++| Header | Required? | Description | +|:|:|:| +| Authorization | Yes | Bearer token obtained through the client credentials flow. Use the token audience value for your cloud:<br><br>Azure public cloud - `https://monitor.azure.com`<br>Microsoft Azure operated by 21Vianet cloud - `https://monitor.azure.cn`<br>Azure US Government cloud - `https://monitor.azure.us` | +| Content-Type | Yes | `application/json` | +| Content-Encoding | No | `gzip` | +| x-ms-client-request-id | No | String-formatted GUID. This is a request ID that can be used by Microsoft for any troubleshooting purposes. | ### Body -The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Additionally, it is important to ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission. +The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission. ++For example: +```json +{ + "TimeGenerated": "2023-11-14 15:10:02", + "Column01": "Value01", + "Column02": "Value02" +} +``` +### Example +See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md?tabs=powershell#sample-code) for an example of the API call using PowerShell. ## Limits and restrictions |
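As a rough illustration of the flow described in this entry, here is a minimal sketch that sends the same sample record with the Python client library (`azure-monitor-ingestion`) listed under client libraries. The endpoint, DCR immutable ID, and stream name are the placeholder values from the example URI above, and the sketch assumes the app registration described in the Configuration section is available to `DefaultAzureCredential`; replace all of them with your own values.

```python
# Minimal sketch: send one record through the Logs Ingestion API using the
# Python client library. Authentication relies on DefaultAzureCredential,
# which can pick up the app registration (client ID, tenant ID, secret)
# from environment variables, among other mechanisms.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholder values taken from the example URI above; use your own DCE and DCR.
endpoint = "https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com"
rule_id = "dcr-000a00a000a00000a000000aa000a0aa"
stream_name = "Custom-MyTable"

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# The body must match the structure expected by the stream declared in the DCR.
logs = [
    {
        "TimeGenerated": "2023-11-14 15:10:02",
        "Column01": "Value01",
        "Column02": "Value02",
    }
]

client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)
```

The library obtains the bearer token and sets the request headers described above, so you supply only the DCE endpoint, the DCR immutable ID, the stream name, and the data.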
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | The steps required to configure the Logs ingestion API are as follows: To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. ## Collect workspace details |
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | The steps required to configure the Logs ingestion API are as follows: To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - PowerShell 7.2 or later. ## Overview of the tutorial |
azure-monitor | Tutorial Workspace Transformations Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md | In this tutorial, you learn to: To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - The table must already have some data. - The table can't already be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). |
azure-monitor | Tutorial Workspace Transformations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md | In this tutorial, you learn how to: To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - A table that already has some data. - The table can't be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr). |
azure-monitor | Monitor Virtual Machine Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md | Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables Use a rule with the following query: + ```kusto Heartbeat | summarize TimeGenerated=max(TimeGenerated) by Computer, _ResourceId | extend Duration = datetime_diff('minute',now(),TimeGenerated)-| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId +| summarize MinutesSinceLastHeartbeat = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId ``` ### CPU alerts This section describes CPU alerts. **CPU utilization** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Processor" and Name == "UtilizationPercentage"-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId +| summarize CPUPercentageAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` ### Memory alerts This section describes memory alerts. **Available memory in MB** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB"-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId +| summarize AvailableMemoryInMBAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` **Available memory in percentage** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB" | extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"]) | extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0-| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId -``` +| summarize AvailableMemoryInPercentageAverage = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId +``` ### Disk alerts This section describes disk alerts. 
**Logical disk used - all disks on each computer** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId +| summarize LogicalDiskSpacePercentageFreeAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` **Logical disk used - individual disks** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk +| summarize LogicalDiskSpacePercentageFreeAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk ``` **Logical disk IOPS** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "TransfersPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk +| summarize DiskIOPSAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk ``` **Logical disk data rate** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "BytesPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk +| summarize DiskBytesPerSecondAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk ``` ### Network alerts InsightsMetrics **Network interfaces bytes received - all interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "ReadBytesPerSecond"-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId +| summarize BytesReceivedAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` **Network interfaces bytes received - individual interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "ReadBytesPerSecond" | extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface +| summarize BytesReceivedAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface ``` **Network interfaces bytes sent - all interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "WriteBytesPerSecond"-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId +| summarize BytesSentAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` **Network interfaces bytes sent - individual interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "WriteBytesPerSecond" | extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface +| summarize BytesSentAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface ``` ### Windows and Linux events The following sample creates an alert when a specific Windows event is created. 
It uses a metric measurement alert rule to create a separate alert for each computer. - **Create an alert rule on a specific Windows event.**- This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0. + ```kusto Event | where EventLog == "Application" | where EventID == 123 - | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m) + | summarize NumberOfEvents = count() by Computer, bin(TimeGenerated, 15m) ``` - **Create an alert rule on Syslog events with a particular severity.**- The following example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0. + ```kusto Syslog | where Facility == "auth" | where SeverityLevel == "err"- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m) + | summarize NumberOfEvents = count() by Computer, bin(TimeGenerated, 15m) ``` ### Custom performance counters |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | General|[Azure Monitor cost and usage](cost-usage.md)|Added section detailing bi Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|A caution has been added about using community libraries with additional information on how to request we include them in our distro.| Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Support and feedback options are now available across all of our OpenTelemetry pages.| Application-Insights|[How many Application Insights resources should I deploy?](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy)|We added an important warning about additional network costs when monitoring across regions.|-Application-Insights|[Use Search in Application Insights](app/search-and-transaction-diagnostics.md?tabs=transaction-search)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.| +Application-Insights|[Use Search in Application Insights](app/transaction-search-and-diagnostics.md?tabs=transaction-search)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.| Application-Insights|[Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro](app/opentelemetry-python-opencensus-migrate.md)|Migrate from OpenCensus to OpenTelemetry with this step-by-step guidance.| Application-Insights|[Application Insights overview](app/app-insights-overview.md)|We've added an illustration to convey how Azure Monitor Application Insights works at a high level.| Containers|[Troubleshoot collection of Prometheus metrics in Azure Monitor](containers/prometheus-metrics-troubleshoot.md)|Added the *Troubleshoot using PowerShell script* section.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to |[Java Profiler for Azure Monitor Application Insights](./app/java-standalone-profiler.md)|Announced the new Java Profiler at Ignite. Read all about it.| |[Release notes for Azure Web App extension for Application Insights](./app/web-app-extension-release-notes.md)|Added release notes for 2.8.44 and 2.8.43.| |[Resource Manager template samples for creating Application Insights resources](./app/resource-manager-app-resource.md)|Fixed inaccurate tagging of workspace-based resources as still in preview.|-|[Unified cross-component transaction diagnostics](./app/search-and-transaction-diagnostics.md?tabs=transaction-diagnostics)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."| +|[Unified cross-component transaction diagnostics](./app/transaction-search-and-diagnostics.md?tabs=transaction-diagnostics)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."| |[Upgrading from Application Insights Java 2.x SDK](./app/java-standalone-upgrade-from-2x.md)|Added more upgrade guidance. Java 2.x is deprecated.| |[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Updated configuration options.| |
azure-netapp-files | Auxiliary Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/auxiliary-groups.md | + + Title: Understand auxiliary/supplemental groups with NFS in Azure NetApp Files +description: Learn about auxiliary/supplemental groups with NFS in Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand auxiliary/supplemental groups with NFS in Azure NetApp Files ++NFS has a specific limitation for the maximum number of auxiliary GIDs (secondary groups) that can be honored in a single NFS request. The maximum for [AUTH_SYS/AUTH_UNIX](http://tools.ietf.org/html/rfc5531) is 16. For AUTH_GSS (Kerberos), the maximum is 32. This is a known protocol limitation of NFS. ++Azure NetApp Files provides the ability to increase the maximum number of auxiliary groups to 1,024. This is done by avoiding truncation of the group list in the NFS packet and instead prefetching the requesting user's groups from a name service, such as LDAP. ++## How it works ++The option to extend the group limitation works the same way that the `-manage-gids` option for other NFS servers works. Rather than dumping the entire list of auxiliary GIDs a user belongs to, the option looks up the GID on the file or folder and returns that value instead. ++The [command reference for `mountd`](http://man.he.net/man8/mountd) notes: ++```bash +-g or --manage-gids ++Accept requests from the kernel to map user id numbers into lists of group id numbers for use in access control. An NFS request will normally (except when using Kerberos or other cryptographic authentication) contain a user-id and a list of group-ids. Due to a limitation in the NFS protocol, at most 16 group ids can be listed. If you use the -g flag, then the list of group ids received from the client will be replaced by a list of group ids determined by an appropriate lookup on the server. +``` ++When an access request is made, only 16 GIDs are passed in the RPC portion of the packet. +++Any GID beyond the limit of 16 is dropped by the protocol. Extended GIDs in Azure NetApp Files can only be used with external name services such as LDAP. ++## Potential performance impacts ++Extended groups have a minimal performance penalty, generally in the low single-digit percentages. Higher metadata NFS workloads would likely have more effect, particularly on the system's caches. Performance can also be affected by the speed and workload of the name service servers. Overloaded name service servers are slower to respond, causing delays in prefetching the GID. For best results, use multiple name service servers to handle large numbers of requests. ++## "Allow local users with LDAP" option ++When a user attempts to access an Azure NetApp Files volume via NFS, the request comes in as a numeric ID. By default, Azure NetApp Files supports extended group memberships for NFS users (to go beyond the standard 16 group limit to 1,024). As a result, Azure NetApp Files attempts to look up the numeric ID in LDAP to resolve the group memberships for the user rather than passing the group memberships in an RPC packet. ++Due to that behavior, if that numeric ID can't be resolved to a user in LDAP, the lookup fails and access is denied, even if the requesting user has permission to access the volume or data structure. 
++The [Allow local NFS users with LDAP option](configure-ldap-extended-groups.md) in Active Directory connections is intended to disable those LDAP lookups for NFS requests by disabling the extended group functionality. It doesn't provide "local user creation/management" within Azure NetApp Files. ++For more information about the option, including how it behaves with different volume security styles in Azure NetApp Files, see [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md). ++## Next steps ++* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md) +* [Allow local NFS users with LDAP option](configure-ldap-extended-groups.md) |
azure-netapp-files | Azure Netapp Files Configure Export Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md | Last updated 07/28/2021 # Configure export policy for NFS or dual-protocol volumes -You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB). +You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB). You can create up to five export policy rules. +Once created, you can modify details of the export policy rule. The modifiable fields are: ++- IP address (for example, x.x.x.x) +- CIDR range (a subnet range; for example, 0.0.0.0/0) +- Comma-separated list of IP addresses (for example, x.x.x.x, y.y.y.y) +- Access level +- [Export policy rule order](network-attached-storage-permissions.md#export-policy-rule-ordering) ++Before modifying policy rules with NFS Kerberos enabled, see [Export policy rules with NFS Kerberos enabled](network-attached-storage-permissions.md#export-policy-rule-ordering). + ## Configure the policy 1. On the **Volumes** page, select the volume for which you want to configure the export policy, and then select **Export policy**. You can also configure the export policy during the creation of the volume. You can create up to five export policy rules. ![Screenshot that shows the change ownership mode option.](../media/azure-netapp-files/chown-mode-export-policy.png) ## Next steps +* [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md) * [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md) * [Manage snapshots](azure-netapp-files-manage-snapshots.md) |
azure-netapp-files | Azure Netapp Files Manage Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md | Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio ## Steps -1. Go to the volume that you want to create a snapshot for. Click **Snapshots**. +1. Go to the volume that you want to create a snapshot for. Select **Snapshots**. ![Screenshot that shows how to navigate to the snapshots blade.](../media/azure-netapp-files/azure-netapp-files-navigate-to-snapshots.png) -2. Click **+ Add snapshot** to create an on-demand snapshot for a volume. +2. Select **+ Add snapshot** to create an on-demand snapshot for a volume. ![Screenshot that shows how to add a snapshot.](../media/azure-netapp-files/azure-netapp-files-add-snapshot.png) Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio ![Screenshot that shows the New Snapshot window.](../media/azure-netapp-files/azure-netapp-files-new-snapshot.png) -4. Click **OK**. +4. Select **OK**. ## Next steps |
azure-netapp-files | Configure Access Control Lists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-access-control-lists.md | Azure NetApp Files supports access control lists (ACLs) on NFSv4.1 volumes. ACLs ACLs contain access control entries (ACEs), which specify the permissions (read, write, etc.) of individual users or groups. When assigning user roles, provide the user email address if you're using a Linux VM joined to an Active Directory Domain. Otherwise, provide user IDs to set permissions. +To learn more about ACLs in Azure NetApp Files, see [Understand NFSv4.x ACLs](nfs-access-control-lists.md). + ## Requirements - ACLs can only be configured on NFSv4.1 volumes. You can [convert a volume from NFSv3 to NFSv4.1](convert-nfsv3-nfsv41.md). ACLs contain access control entries (ACEs), which specify the permissions (read ## Next steps * [Configure NFS clients](configure-nfs-clients.md)+* [Understand NFSv4.x ACLs](nfs-access-control-lists.md) |
azure-netapp-files | Configure Network Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md | This section shows you how to set the network features option when you create a ## Edit network features option for existing volumes -You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same NIC for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible. +You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same network interface card (NIC) for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible. ++>[!IMPORTANT] +>It's not recommended that you use the edit network features option with Terraform-managed volumes due to the associated risks. You must follow separate instructions if you use Terraform-managed volumes. For more information, see [Update Terraform-managed Azure NetApp Files volume from Basic to Standard](#update-terraform-managed-azure-netapp-files-volume-from-basic-to-standard). You can also revert the option from *Standard* back to *Basic* network features, but considerations apply and require careful planning. For example, you might need to change configurations for Network Security Groups (NSGs), user-defined routes (UDRs), and IP limits if you revert. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#constraints) for constraints and supported network topologies about Standard and Basic network features. This feature currently doesn't support SDK. > [!IMPORTANT] > Updating the network features option might cause a network disruption on the volumes for up to 5 minutes. -1. Navigate to the volume that you want to change the network features option. +1. Navigate to the volume for which you want to change the network features option. 1. Select **Change network features**. 1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option. :::image type="content" source="../media/azure-netapp-files/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="../media/azure-netapp-files/edit-network-features.png"::: +### Update Terraform-managed Azure NetApp Files volume from Basic to Standard ++If your Azure NetApp Files volume is managed using Terraform, editing the network features requires additional steps. Terraform-managed Azure resources store their state in a local file, which is in your Terraform module or in Terraform Cloud. ++Updating the network features of your volume alters the underlying network sibling set of the NIC utilized by that volume. This NIC can be utilized by other volumes you own, and other NICs can share the same network sibling set. 
**If not performed correctly, updating the network features of one Terraform-managed volume can inadvertently update the network features of several other volumes.** ++>[!IMPORTANT] +>A discontinuity between state data and remote Azure resource configurations (notably in the `network_features` argument) can result in the destruction of one or more volumes and possible data loss upon running `terraform apply`. Carefully follow the workaround outlined here to safely update the network features of Terraform-managed volumes from Basic to Standard. ++>[!NOTE] +>A Terraform module usually consists solely of all top-level `*.tf` and/or `*.tf.json` configuration files in a directory, but a Terraform module can make use of module calls to explicitly include other modules into the configuration. You can [learn more about possible module structures](https://developer.hashicorp.com/terraform/language/files). To update all configuration files in your module that reference Azure NetApp Files volumes, be sure to look at all possible sources where your module can reference configuration files. ++The name of the state file in your Terraform module is `terraform.tfstate`. It contains the arguments and their values of all deployed resources in the module. The following example `terraform.tfstate` file highlights the `network_features` argument with the value "Basic" for an Azure NetApp Files volume: +++Do _not_ manually update the `terraform.tfstate` file. Likewise, the `network_features` argument in the `*.tf` and `*.tf.json` configuration files should also not be updated until you follow the steps outlined here, because doing so would cause a mismatch between the arguments of the remote volume and the local configuration file representing that remote volume. When Terraform detects a mismatch between the arguments of remote resources and local configuration files representing those remote resources, Terraform can destroy the remote resources and reprovision them with the arguments in the local configuration files. This can cause data loss in a volume. ++By following the steps outlined here, the `network_features` argument in the `terraform.tfstate` file is automatically updated by Terraform to have the value of "Standard" without destroying the remote volume, thus indicating that the network features setting has been successfully updated to Standard. ++>[!NOTE] +> It's recommended to always use the latest Terraform version and the latest version of the `azurerm` Terraform provider. ++#### Determine affected volumes ++Changing the network features for an Azure NetApp Files volume can impact the network features of other Azure NetApp Files volumes. Volumes in the same network sibling set must have the same network features setting. Therefore, before you change the network features of one volume, you must determine all volumes affected by the change using the Azure portal. ++1. Log in to the Azure portal. +1. Navigate to the volume for which you want to change the network features option. +1. Select **Change network features**. ***Do **not** select Save.*** +1. Record the paths of the affected volumes, then select **Cancel**. +++You need to find and update all Terraform configuration files that define these volumes. The configuration files representing the affected volumes might not be in the same Terraform module. 
++>[!IMPORTANT] +>Apart from the single volume you know is managed by Terraform, other affected volumes might not be managed by Terraform. A volume listed as being in the same network sibling set isn't necessarily managed by Terraform. ++#### Modify the affected volumes' configuration files ++You must modify the configuration files for each affected volume managed by Terraform that you discovered. Failing to update the configuration files can destroy the volume or result in data loss. ++>[!IMPORTANT] +>Depending on your volume's `lifecycle` configuration block settings in your Terraform configuration file, your volume can be destroyed, with possible data loss, upon running `terraform apply`. Ensure you know which affected volumes are managed by Terraform and which are not. ++1. Locate the affected Terraform-managed volumes' configuration files. +1. Add `ignore_changes = [network_features]` to the volume's `lifecycle` configuration block. If the `lifecycle` block doesn't exist in that volume's configuration, add it. ++ :::image type="content" source="../media/azure-netapp-files/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="../media/azure-netapp-files/terraform-lifecycle.png"::: ++1. Repeat for each affected Terraform-managed volume. + The `ignore_changes` feature is intended to be used when a resource's reference to data might change after the resource is created. Adding the `ignore_changes` feature to the `lifecycle` block allows the network features of the volumes to be changed in the Azure portal without Terraform trying to fix this argument of the volume on the next run of `terraform apply`. You can [learn more about the `ignore_changes` feature](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle). ++#### Update the volumes' network features ++1. In the Azure portal, navigate to the Azure NetApp Files volume for which you want to change network features. +1. Select **Change network features**. +1. In the **Action** field, confirm that it reads **Change to Standard**. ++ :::image type="content" source="../media/azure-netapp-files/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="../media/azure-netapp-files/change-network-features-standard.png"::: ++1. Select **Save**. +1. Wait until you receive a notification that the network features update has completed. In your **Notifications**, the message reads "Successfully updated network features. Network features for network sibling set have successfully updated to 'Standard'." +1. In the terminal, run `terraform plan` to view any potential changes. The output should indicate that the infrastructure matches the configuration with a message reading "No changes. Your infrastructure matches the configuration." ++ :::image type="content" source="../media/azure-netapp-files/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="../media/azure-netapp-files/terraform-plan-output.png"::: ++ >[!IMPORTANT] + > As a safety precaution, execute `terraform plan` before executing `terraform apply`. The command `terraform plan` allows you to create a "plan" file, which contains the changes to your remote resources. This plan allows you to know if any of your affected volumes will be destroyed by running `terraform apply`. ++1. Run `terraform apply` to update the `terraform.tfstate` file. 
++ Repeat for all modules containing affected volumes. ++ Observe the change in the value of the `network_features` argument in the `terraform.tfstate` files, which changed from "Basic" to "Standard": ++ :::image type="content" source="../media/azure-netapp-files/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="../media/azure-netapp-files/updated-terraform-module.png"::: ++#### Update Terraform-managed Azure NetApp Files volumes' configuration files for configuration parity ++Once you've updated the volumes' network features, you must also modify the `network_features` arguments and `lifecycle` blocks in all configuration files of affected Terraform-managed volumes. This update ensures that if you have to recreate or update the volume, it maintains its Standard network features setting. ++1. In the configuration file, set `network_features` to "Standard" and remove the `ignore_changes = [network_features]` line from the `lifecycle` block. ++ :::image type="content" source="../media/azure-netapp-files/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="../media/azure-netapp-files/terraform-network-features-standard.png"::: ++1. Repeat for each affected Terraform-managed volume. +1. Verify that the updated configuration files accurately represent the configuration of the remote resources by running `terraform plan`. Confirm the output reads "No changes." +1. Run `terraform apply` to complete the update. + ## Next steps * [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) |
azure-netapp-files | Manage Default Individual User Group Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md | Quota rules only come into effect on the CRR/CZR destination volume after the re * To provide optimal performance, the space consumption may exceed the configured hard limit before the quota is enforced. The additional space consumption won't exceed the lower of 1 GB or five percent of the configured hard limit. * After reaching the quota limit, if a user or administrator deletes files or directories to reduce quota usage under the limit, subsequent quota-consuming file operations may resume with a delay of up to five seconds. -## Register the feature --The feature to manage user and group quotas is currently in preview. Before using this feature for the first time, you need to register it. --1. Register the feature: -- ```azurepowershell-interactive - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota - ``` --2. Check the status of the feature registration: -- ```azurepowershell-interactive - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota - ``` - > [!NOTE] - > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. --You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. - ## Create new quota rules 1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume. |
azure-netapp-files | Manage Smb Share Access Control Lists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-smb-share-access-control-lists.md | + + Title: Manage SMB share ACLs in Azure NetApp Files +description: Learn how to manage SMB share access control lists in Azure NetApp Files. ++++++ Last updated : 11/03/2023++# Manage SMB share ACLs in Azure NetApp Files ++SMB shares can control which users can mount and access a share, as well as control access levels for users and groups in an Active Directory domain. The first level of permissions that gets evaluated is the share access control list (ACL). ++There are two ways to view share settings: ++* In the **Advanced permissions** settings ++* With the **Microsoft Management Console (MMC)** ++## Prerequisites ++You must have the mount path. You can retrieve it in the Azure portal by navigating to the **Overview** menu of the volume for which you want to configure share ACLs. Identify the **Mount path**. ++++## View SMB share ACLs with advanced permissions ++Advanced permissions for files, folders, and shares on an Azure NetApp Files volume can be accessed by right-clicking the Azure NetApp Files share at the top level of the UNC path (for example, `\\Azure.NetApp.Files\`) or in the Windows Explorer view when navigating to the share itself (for instance, `\\Azure.NetApp.Files\sharename`). ++>[!NOTE] +>You can only view SMB share ACLs in the **Advanced permissions** settings. ++1. In Windows Explorer, use the mount path to open the volume. Right-click the volume, then select **Properties**. Switch to the **Security** tab, then select **Advanced**. ++ :::image type="content" source="../media/azure-netapp-files/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="../media/azure-netapp-files/security-advanced-tab.png"::: ++1. In the new window that appears, switch to the **Share** tab to view the share-level ACLs. You cannot modify share-level ACLs. ++ >[!NOTE] + >Azure NetApp Files doesn't support Windows audit ACLs. Azure NetApp Files ignores any audit ACL applied to files or directories hosted on Azure NetApp Files volumes. ++ :::image type="content" source="../media/azure-netapp-files/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="../media/azure-netapp-files/view-permissions.png"::: ++ :::image type="content" source="../media/azure-netapp-files/view-shares.png" alt-text="Screenshot of the share tab." lightbox="../media/azure-netapp-files/view-shares.png"::: +++## Modify share-level ACLs with the Microsoft Management Console ++You can only modify the share ACLs in Azure NetApp Files with the Microsoft Management Console (MMC). ++1. To modify share-level ACLs in Azure NetApp Files, open the Computer Management MMC from the Server Manager in Windows. From there, select the **Tools** menu, then **Computer Management**. ++1. In the Computer Management window, right-click **Computer management (local)**, then select **Connect to another computer**. ++ :::image type="content" source="../media/azure-netapp-files/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="../media/azure-netapp-files/computer-management-local.png"::: ++1. In the **Another computer** field, enter the fully qualified domain name (FQDN). ++ The FQDN comes from the mount path you retrieved in the prerequisites. For example, if the mount path is `\\ANF-West-f899.contoso.com\SMBVolume`, enter `ANF-West-f899.contoso.com` as the FQDN. ++1. 
Once connected, expand **System Tools**, then select **Shared Folders > Shares**. +1. To manage share permissions, right-click the name of the share you want to modify in the list, then select **Properties**. ++ :::image type="content" source="../media/azure-netapp-files/share-folder.png" alt-text="Screenshot of the share folder." lightbox="../media/azure-netapp-files/share-folder.png"::: ++1. Add, remove, or modify the share ACLs as appropriate. ++ :::image type="content" source="../media/azure-netapp-files/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="../media/azure-netapp-files/add-share.png"::: + +## Next step ++* [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md) |
azure-netapp-files | Network Attached File Permissions Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-nfs.md | + + Title: Understand NFS file permissions in Azure NetApp Files +description: Learn about mode bits in NFS workloads on Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand mode bits in Azure NetApp Files ++File access permissions in NFS limit what users and groups can do once a NAS volume is mounted. Mode bits are a key feature of NFS file permissions in Azure NetApp Files. ++## NFS mode bits ++Mode bit permissions in NFS provide basic permissions for files and folders, using a standard numeric representation of access controls. Mode bits can be used with either NFSv3 or NFSv4.1, but mode bits are the standard option for securing NFSv3 as defined in [RFC-1813](https://tools.ietf.org/html/rfc1813#page-22). The following table shows how those numeric values correspond to access controls. ++| Mode bit numeric | +| | +| 1 - execute (x) | +| 2 - write (w) | +| 3 - write/execute (wx) | +| 4 - read (r) | +| 5 - read/execute (rx) | +| 6 - read/write (rw) | +| 7 - read/write/execute (rwx) | ++Numeric values are applied to different segments of an access control: owner, group, and everyone else, meaning that there are no granular user access controls in place for basic NFSv3. The following image shows an example of how a mode bit access control might be constructed for use with an NFSv3 object. +++Azure NetApp Files doesn't support POSIX ACLs. Thus, granular ACLs are only possible with NFSv3 when using an NTFS security style volume with valid UNIX to Windows name mappings via a name service such as Active Directory LDAP. Alternately, you can use NFSv4.1 with Azure NetApp Files and NFSv4.1 ACLs. ++The following table compares the permission granularity between NFSv3 mode bits and NFSv4.x ACLs. ++| NFSv3 mode bits | NFSv4.x ACLs | +| - | - | +| <ul><li>Set user ID on execution (setuid)</li><li>Set group ID on execution (setgid)</li><li>Save swapped text (sticky bit)</li><li>Read permission for owner</li><li>Write permission for owner</li><li>Execute permission for owner on a file; or look up (search) permission for owner in directory</li><li>Read permission for group</li><li>Write permission for group</li><li>Execute permission for group on a file; or look up (search) permission for group in directory</li><li>Read permission for others</li><li>Write permission for others</li><li>Execute permission for others on a file; or look up (search) permission for others in directory</li></ul> | <ul><li>ACE types (Allow/Deny/Audit)</li><li>Inheritance flags:</li><li>directory-inherit</li><li>file-inherit</li><li>no-propagate-inherit</li><li>inherit-only</li><li>Permissions:</li><li>read-data (files) / list-directory (directories)</li><li>write-data (files) / create-file (directories)</li><li>append-data (files) / create-subdirectory (directories)</li><li>execute (files) / change-directory (directories)</li><li>delete </li><li>delete-child</li><li>read-attributes</li><li>write-attributes</li><li>read-named-attributes</li><li>write-named-attributes</li><li>read-ACL</li><li>write-ACL</li><li>write-owner</li><li>Synchronize</li></ul> | ++For more information, see [Understand NFSv4.x access control lists (ACLs)](nfs-access-control-lists.md). 
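To make the numeric representation above concrete, here is a small illustrative sketch (not part of the article) that reads the mode bits of an existing file with Python's standard library; `testfile` is a hypothetical path on a mounted volume.

```python
# Illustrative only: inspect the mode bits of an existing file from a client.
# "testfile" is a hypothetical path on a mounted NFS volume.
import os
import stat

st = os.stat("testfile")
mode = stat.S_IMODE(st.st_mode)

print(oct(mode))                  # numeric form, for example 0o644
print(stat.filemode(st.st_mode))  # symbolic form, for example -rw-r--r--

# Each octal digit maps to the table above (read=4, write=2, execute=1),
# applied to the owner, group, and everyone else segments.
owner, group, other = (mode >> 6) & 7, (mode >> 3) & 7, mode & 7
print(owner, group, other)        # for example: 6 4 4
```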
++### Sticky bits, setuid, and setgid ++When using mode bits with NFS mounts, the ownership of files and folders is based on the `uid` and `gid` of the user that created the files and folders. Additionally, when a process runs, it runs as the user that kicked it off, and thus, would have the corresponding permissions. With special permissions (such as `setuid`, `setgid`, sticky bit), this behavior can be controlled. ++#### Setuid ++The `setuid` bit is designated by an "s" in the execute portion of the owner bit of a permission. The `setuid` bit allows an executable file to be run as the owner of the file rather than as the user attempting to execute the file. For instance, the `/bin/passwd` application has the `setuid` bit enabled by default, therefore the application runs as root when a user tries to change their password. ++```bash +# ls -la /bin/passwd +-rwsr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd +``` +If the `setuid` bit is removed, the password change functionality won't work properly. ++```bash +# ls -la /bin/passwd +-rwxr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd +user2@parisi-ubuntu:/mnt$ passwd +Changing password for user2. +Current password: +New password: +Retype new password: +passwd: Authentication token manipulation error +passwd: password unchanged +``` ++When the `setuid` bit is restored, the passwd application runs as the owner (root) and works properly, but only for the user running the passwd command. ++```bash +# chmod u+s /bin/passwd +# ls -la /bin/passwd +-rwsr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd +# su user2 +user2@parisi-ubuntu:/mnt$ passwd user1 +passwd: You may not view or modify password information for user1. +user2@parisi-ubuntu:/mnt$ passwd +Changing password for user2. +Current password: +New password: +Retype new password: +passwd: password updated successfully +``` ++Setuid has no effect on directories. ++#### Setgid ++The `setgid` bit can be used on both files and directories. ++With directories, setgid can be used as a way to inherit the owner group for files and folders created below the parent directory with the bit set. Like `setuid`, the executable bit is changed to an "s" or an "S." ++>[!NOTE] +>Capital "S" means that the executable bit hasn't been set, such as if the permissions on the directory are "6" or "rw." ++For example: ++```bash +# chmod g+s testdir +# ls -la | grep testdir +drwxrwSrw- 2 user1 group1 4096 Oct 11 16:34 testdir +# who +root ttyS0 2023-10-11 16:28 +# touch testdir/file +# ls -la testdir +total 8 +drwxrwSrw- 2 user1 group1 4096 Oct 11 17:09 . +drwxrwxrwx 5 root root 4096 Oct 11 16:37 .. +-rw-r--r-- 1 root group1 0 Oct 11 17:09 file +``` ++For files, setgid behaves similarly to `setuid`: executables run using the group permissions of the group owner. If a user is in the owner group, said user has access to run the executable when setgid is set. If they aren't in the group, they don't get access. For instance, if an administrator wants to limit which users could run the `mkdir` command on a client, they can use setgid. ++Normally, `/bin/mkdir` has 755 permissions with root ownership. This means anyone can run `mkdir` on a client. ++```bash +# ls -la /bin/mkdir +-rwxr-xr-x 1 root root 88408 Sep 5 2019 /bin/mkdir +``` ++To modify the behavior to limit which users can run the `mkdir` command, change the group that owns the `mkdir` application, change the permissions for `/bin/mkdir` to 750, and then add the setgid bit to `mkdir`. 
++```bash +# chgrp group1 /bin/mkdir +# chmod g+s /bin/mkdir +# chmod 750 /bin/mkdir +# ls -la /bin/mkdir +-rwxr-s 1 root group1 88408 Sep 5 2019 /bin/mkdir +``` +As a result, the application runs with permissions for `group1`. If the user isn't a member of `group1`, the user doesn't get access to run `mkdir`. ++`User1` is a member of `group1`, but `user2` isn't: ++```bash +# id user1 +uid=1001(user1) gid=1001(group1) groups=1001(group1) +# id user2 +uid=1002(user2) gid=2002(group2) groups=2002(group2) +``` +After this change, `user1` can run `mkdir`, but `user2` can't since `user2` isn't in `group1`. ++```bash +# su user1 +$ mkdir test +$ ls -la | grep test +drwxr-xr-x 2 user1 group1 4096 Oct 11 18:48 test ++# su user2 +$ mkdir user2-test +bash: /usr/bin/mkdir: Permission denied +``` +#### Sticky bit ++The sticky bit is used for directories only and, when used, controls which files can be modified in that directory regardless of their mode bit permissions. When a sticky bit is set, only file owners (and root) can modify files, even if file permissions are shown as "777." ++In the following example, the directory "sticky" lives in an Azure NetApp Files volume and has wide-open permissions, but the sticky bit is set. ++```bash +# mkdir sticky +# chmod 777 sticky +# chmod o+t sticky +# ls -la | grep sticky +drwxrwxrwt 2 root root 4096 Oct 11 19:24 sticky +``` ++Inside the folder are files owned by different users. All have 777 permissions. ++```bash +# ls -la +total 8 +drwxrwxrwt 2 root root 4096 Oct 11 19:29 . +drwxrwxrwx 8 root root 4096 Oct 11 19:24 .. +-rwxr-xr-x 1 user2 group1 0 Oct 11 19:29 4913 +-rwxrwxrwx 1 UNIXuser group1 40 Oct 11 19:28 UNIX-file +-rwxrwxrwx 1 user1 group1 33 Oct 11 19:27 user1-file +-rwxrwxrwx 1 user2 group1 34 Oct 11 19:27 user2-file +``` ++Normally, anyone would be able to modify or delete these files. But because the parent folder has a sticky bit set, only the file owners can make changes to the files. ++For instance, user1 can't modify or delete `user2-file`: ++```bash +# su user1 +$ vi user2-file +Only user2 can modify this file. +Hi +~ +"user2-file" +"user2-file" E212: Can't open file for writing +$ rm user2-file +rm: can't remove 'user2-file': Operation not permitted +``` ++Conversely, `user2` can't modify or delete `user1-file` since they don't own the file and the sticky bit is set on the parent directory. ++```bash +# su user2 +$ vi user1-file +Only user1 can modify this file. +Hi +~ +"user1-file" +"user1-file" E212: Can't open file for writing +$ rm user1-file +rm: can't remove 'user1-file': Operation not permitted +``` ++Root, however, can still remove the files. ++```bash +# rm UNIX-file +``` ++To change the ability of root to modify files, you must squash root to a different user by way of an Azure NetApp Files export policy rule. For more information, see [root squashing](network-attached-storage-permissions.md#root-squashing). ++### Umask ++In NFS operations, permissions can be controlled through mode bits, which leverage numerical attributes to determine file and folder access. These mode bits determine read, write, execute, and special attributes. Numerically, permissions are represented as: ++* Execute = 1 +* Write = 2 +* Read = 4 ++Total permissions are determined by adding or subtracting a combination of the preceding. For example: ++* 4 + 2 + 1 = 7 (can do everything) +* 4 + 2 = 6 (read/write) ++For more information, see [UNIX Permissions Help](http://www.zzee.com/solutions/unix-permissions.shtml). 
++Umask is a functionality that allows an administrator to restrict the level of permissions allowed to a client. By default, the umask for most clients is set to 0022. 0022 means that files created from that client are assigned that umask. The umask is subtracted from the base permissions of the object. If a volume has 0777 permissions and is mounted using NFS to a client with a umask of 0022, objects written from the client to that volume have 0755 access (0777 - 0022). ++```bash +# umask +0022 +# umask -S +u=rwx,g=rx,o=rx +``` +However, many operating systems don't allow files to be created with execute permissions, but they do allow folders to have the correct permissions. Thus, files created with a umask of 0022 might end up with permissions of 0644. The following example uses RHEL 6.5: ++```bash +# umask +0022 +# cd /cdot +# mkdir umask_dir +# ls -la | grep umask_dir +drwxr-xr-x. 2 root root 4096 Apr 23 14:39 umask_dir ++# touch umask_file +# ls -la | grep umask_file +-rw-r--r--. 1 root root 0 Apr 23 14:39 umask_file +``` ++## Next steps ++* [Understand auxiliary/supplemental groups with NFS](auxiliary-groups.md) +* [Understand NFSv4.x access control lists](nfs-access-control-lists.md) |
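As a small illustrative aside to the umask arithmetic discussed in this entry (run on an NFS client from a scratch directory; the file and directory names are hypothetical), the following sketch shows a 0022 umask producing 0755 directories and 0644 files:

```python
# Illustrative only: demonstrate how a 0022 umask is subtracted from the
# requested mode. Run from a scratch directory; creates a file and a folder.
import os
import stat

os.umask(0o022)                    # typical client default, as described above
os.mkdir("umask_dir")              # directories are requested with 0o777
open("umask_file", "w").close()    # files are requested with 0o666 on most systems

print(oct(stat.S_IMODE(os.stat("umask_dir").st_mode)))   # 0o755 (0o777 - 0o022)
print(oct(stat.S_IMODE(os.stat("umask_file").st_mode)))  # 0o644 (0o666 - 0o022)
```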
azure-netapp-files | Network Attached File Permissions Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-smb.md | + + Title: Understand SMB file permissions in Azure NetApp Files +description: Learn about SMB file permissions options in Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand SMB file permissions in Azure NetApp Files ++SMB volumes in Azure NetApp Files can leverage NTFS security styles to make use of NTFS access control lists (ACLs) for access controls. ++NTFS ACLs provide granular permissions and ownership for files and folders by way of access control entries (ACEs). Directory permissions can also be set to enable or disable inheritance of permissions. +++For a complete overview of NTFS-style ACLs, see [Microsoft Access Control overview](/windows/security/identity-protection/access-control/access-control). ++## Next steps ++* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) |
azure-netapp-files | Network Attached File Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions.md | + + Title: Understand NAS file permissions in Azure NetApp Files +description: Learn about NAS file permissions options in Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand NAS file permissions in Azure NetApp Files ++To control access to specific files and folders in a file system, permissions can be applied. File and folder permissions are more granular than share permissions. The following table shows the differences in permission attributes that file and share permissions can apply. ++| SMB share permission | NFS export policy rule permissions | SMB file permission attributes | NFS file permission attributes | +| | | | | +| <ul><li>Read</li><li>Change</li><li>Full control</li></ul> | <ul><li>Read</li><li>Write</li><li>Root</li></ul> | <ul><li>Full control</li><li>Traverse folder/execute</li><li>Read data/list folders</li><li>Read attributes</li><li>Read extended attributes</li><li>Write data/create files</li><li>Append data/create folders</li><li>Write attributes</li><li>Write extended attributes</li><li>Delete subfolders/files</li><li>Delete</li><li>Read permissions</li><li>Change permissions</li><li>Take ownership</li></ul> | **NFSv3** <br /> <ul><li>Read</li><li>Write</li><li>Execute</li></ul> <br /> **NFSv4.1** <br /> <ul><li>Read data/list files and folders</li><li>Write data/create files and folders</li><li>Append data/create subdirectories</li><li>Execute files/traverse directories</li><li>Delete files/directories</li><li>Delete subdirectories (directories only)</li><li>Read attributes (GETATTR)</li><li>Write attributes (SETATTR/chmod)</li><li>Read named attributes</li><li>Write named attributes</li><li>Read ACLs</li><li>Write ACLs</li><li>Write owner (chown)</li><li>Synchronize I/O</li></ul> | ++File and folder permissions can overrule share permissions, as the most restrictive permissions countermand less restrictive permissions. ++## Permission inheritance ++Folders can be assigned inheritance flags, which means that parent folder permissions propagate to child objects. This can help simplify permission management on high file count environments. Inheritance can be disabled on specific files or folders as needed. ++* In Windows SMB shares, inheritance is controlled in the advanced permission view. +++* For NFSv3, permission inheritance doesnΓÇÖt work via ACL, but instead can be mimicked using umask and setgid flags. +* With NFSv4.1, permission inheritance can be handled using inheritance flags on ACLs. ++## Next steps ++* [Understand NFS file permissions](network-attached-file-permissions-nfs.md) +* [Understand SMB file permissions](network-attached-file-permissions-smb.md) +* [Understand NAS share permissions in Azure NetApp Files](network-attached-storage-permissions.md) |
azure-netapp-files | Network Attached Storage Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-permissions.md | + + Title: Understand NAS share permissions in Azure NetApp Files +description: Learn about NAS share permissions options in Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand NAS share permissions in Azure NetApp Files ++Azure NetApp Files provides several ways to secure your NAS data. One aspect of that security is permissions. In NAS, permissions can be broken down into two categories: ++* **Share access permissions** limit who can mount a NAS volume. NFS controls share access permissions via IP address or hostname. SMB controls this via user and group access control lists (ACLs). +* **[File access permissions](network-attached-file-permissions.md)** limit what users and groups can do once a NAS volume is mounted. File access permissions are applied to individual files and folders. ++Azure NetApp Files permissions rely on NAS standards, simplifying the process of securing NAS volumes for administrators and end users with familiar methods. ++>[!NOTE] +>If conflicting permissions are listed on share and files, the most restrictive permission is applied. For instance, if a user has read only access at the *share* level and full control at the *file* level, the user receives read access at all levels. ++## Share access permissions ++The initial entry point to be secured in a NAS environment is access to the share itself. In most cases, access should be restricted to only the users and groups that need access to the share. With share access permissions, you can lock down who can even mount the share in the first place. ++Since the most restrictive permissions override other permissions, and a share is the main entry point to the volume (with the fewest access controls), share permissions should abide by a funnel logic, where the share allows more access than the underlying files and folders. The more granular, restrictive controls are then enforced at the file and folder level. +++## NFS export policies ++Volumes in Azure NetApp Files are shared out to NFS clients by exporting a path that is accessible to a client or set of clients. Both NFSv3 and NFSv4.x use the same method to limit access to an NFS share in Azure NetApp Files: export policies. ++An export policy is a container for a set of access rules that are listed in order of desired access. These rules control access to NFS shares by using client IP addresses or subnets. If a client isn't listed in an export policy rule (either allowing or explicitly denying access), then that client is unable to mount the NFS export. Since the rules are read in sequential order, if a more restrictive policy rule is applied to a client (for example, by way of a subnet), then it's read and applied first. Subsequent policy rules that allow more access are ignored. This diagram shows a client that has an IP of 10.10.10.10 getting read-only access to a volume because the subnet 0.0.0.0/0 (every client in every subnet) is set to read-only and is listed first in the policy. +++### Export policy rule options available in Azure NetApp Files ++When you create an Azure NetApp Files volume, several options are configurable to control access to NFS volumes. ++* **Index**: specifies the order in which an export policy rule is evaluated.
If a client falls under multiple rules in the policy, then the first applicable rule applies to the client and subsequent rules are ignored. +* **Allowed clients**: specifies which clients a rule applies to. This value can be a client IP address, a comma-separated list of IP addresses, or a subnet including multiple clients. The hostname and netgroup values aren't supported in Azure NetApp Files. +* **Access**: specifies the level of access allowed to non-root users. For NFS volumes without Kerberos enabled, the options are: Read only, Read & write, or No access. For volumes with Kerberos enabled, the options are: Kerberos 5, Kerberos 5i, or Kerberos 5p. +* **Root access**: specifies how the root user is treated in NFS exports for a given client. If set to "On," the root is root. If set to "Off," the [root is squashed](#root-squashing) to the anonymous user ID 65534. +* **chown mode**: controls what users can run change ownership commands on the export (chown). If set to "Restricted," only the root user can run chown. If set to "Unrestricted," any user with the proper file/folder permissions can run chown commands. ++### Default policy rule in Azure NetApp Files ++When creating a new volume, a default policy rule is created. The default policy prevents a scenario where a volume is created without policy rules, which would restrict access for any client attempting access to the export. If there are no rules, there is no access. ++The default rule has the following values: ++* Index = 1 +* Allowed clients = 0.0.0.0/0 (all clients allowed access) +* Access = Read & write +* Root access = On +* Chown mode = Restricted ++These values can be changed at volume creation or after the volume has been created. ++### Export policy rules with NFS Kerberos enabled in Azure NetApp Files ++[NFS Kerberos](configure-kerberos-encryption.md) can be enabled only on volumes using NFSv4.1 in Azure NetApp Files. Kerberos provides added security by offering different modes of encryption for NFS mounts, depending on the Kerberos type in use. ++When Kerberos is enabled, the values for the export policy rules change to allow specification of which Kerberos mode should be allowed. Multiple Kerberos security modes can be enabled in the same rule if you need access to more than one. ++Those security modes include: ++* **Kerberos 5**: Only initial authentication is encrypted. +* **Kerberos 5i**: User authentication plus integrity checking. +* **Kerberos 5p**: User authentication, integrity checking, and privacy. All packets are encrypted. ++Only Kerberos-enabled clients are able to access volumes with export rules specifying Kerberos; no `AUTH_SYS` access is allowed when Kerberos is enabled. ++### Root squashing ++There are some scenarios where you want to restrict root access to an Azure NetApp Files volume. Since root has unfettered access to anything in an NFS volume (even when explicitly denying access to root using mode bits or ACLs), the only way to limit root access is to tell the NFS server that root from a specific client is no longer root. ++In export policy rules, select "Root access: off" to squash root to a non-root, anonymous user ID of 65534. This means that the root on the specified clients is now user ID 65534 (typically `nfsnobody` on NFS clients) and has access to files and folders based on the ACLs/mode bits specified for that user. For mode bits, the access permissions generally fall under the "Everyone" access rights.
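As a hedged illustration of the squashing behavior described above, the following sketch shows what root on a client matched by a "Root access: Off" rule might observe; the mount path `/mnt/anfvol` is hypothetical:

```bash
# root creates a file on the export, but the write lands as the anonymous user (65534)
sudo touch /mnt/anfvol/root-made-file
ls -ln /mnt/anfvol/root-made-file
# -rw-r--r-- 1 65534 65534 0 Nov 16 10:02 /mnt/anfvol/root-made-file
```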
Additionally, files written as "root" from clients impacted by root squash rules create files and folders as the `nfsnobody:65534` user. If you require root to be root, set "Root access" to "On." ++To learn more about managing export policies, see [Configure export policies for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md). ++#### Export policy rule ordering ++The order of export policy rules determines how they are applied. The first rule in the list that applies to an NFS client is the rule used for that client. When using CIDR ranges/subnets for export policy rules, an NFS client in that range may receive unwanted access due to the range in which it's included. ++Consider the following example: +++- The first rule in the index includes *all clients* in *all subnets* by way of the default policy rule using 0.0.0.0/0 as the **Allowed clients** entry. That rule allows "Read & Write" access to all clients for that Azure NetApp Files NFSv3 volume. +- The second rule in the index explicitly lists NFS client 10.10.10.10 and is configured to limit access to "Read only," with no root access (root is squashed). ++As it stands, the client 10.10.10.10 receives access due to the first rule in the list. The next rule is never evaluated for access restrictions, so 10.10.10.10 gets Read & Write access even though "Read only" is desired. Root also remains root, rather than [being squashed](#root-squashing). ++To fix this and set access to the desired level, the rules can be reordered to place the desired client access rule above any subnet/CIDR rules. You can reorder export policy rules in the Azure portal by dragging the rules or using the **Move** commands in the `...` menu in the row for each export policy rule. ++>[!NOTE] +>You can use the [Azure NetApp Files CLI or REST API](azure-netapp-files-sdk-cli.md) only to add or remove export policy rules. ++## SMB shares ++SMB shares enable end users to access SMB or dual-protocol volumes in Azure NetApp Files. Access controls for SMB shares are limited in the Azure NetApp Files control plane to only SMB security options such as access-based enumeration and non-browsable share functionality. These security options are configured during volume creation with the **Edit volume** functionality. +++Share-level permission ACLs are managed through a Windows MMC console rather than through Azure NetApp Files. ++### Security-related share properties ++Azure NetApp Files offers multiple share properties to enhance security for administrators. ++#### Access-based enumeration ++[Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is an Azure NetApp Files SMB volume feature that limits enumeration of files and folders (that is, listing the contents) in SMB only to users with allowed access on the share. For instance, if a user doesn't have access to read a file or folder in a share with access-based enumeration enabled, then the file or folder doesn't show up in directory listings. In the following example, a user (`smbuser`) doesn't have access to read a folder named "ABE" in an Azure NetApp Files SMB volume. Only `contosoadmin` has access. +++In the following example, access-based enumeration is disabled, so the user has access to the `ABE` directory of `SMBVolume`. +++In the next example, access-based enumeration is enabled, so the `ABE` directory of `SMBVolume` doesn't display for the user. +++The permissions also extend to individual files.
In the following example, access-based enumeration is disabled and `ABE-file` displays to the user. +++With access-based enumeration enabled, `ABE-file` doesn't display to the user. +++#### Non-browsable shares ++The non-browsable shares feature in Azure NetApp Files limits clients from browsing for an SMB share by hiding the share from view in Windows Explorer or when listing shares in "net view." Only end users that know the absolute paths to the share are able to find the share. ++In the following image, the non-browsable share property isn't enabled for `SMBVolume`, so the volume displays in the listing of the file server (using `\\servername`). +++With non-browsable shares enabled on `SMBVolume` in Azure NetApp Files, the same view of the file server excludes `SMBVolume`. ++In the next image, the share `SMBVolume` has non-browsable shares enabled in Azure NetApp Files. When that is enabled, this is the view of the top level of the file server. +++Even though the volume in the listing cannot be seen, it remains accessible if the user knows the file path. +++#### SMB3 encryption ++SMB3 encryption is an Azure NetApp Files SMB volume feature that enforces encryption over the wire for SMB clients for greater security in NAS environments. The following image shows a screen capture of network traffic when SMB encryption is disabled. Sensitive information, such as file names and file handles, is visible. +++When SMB Encryption is enabled, the packets are marked as encrypted, and no sensitive information can be seen. Instead, it's shown as "Encrypted SMB3 data." +++#### SMB share ACLs ++SMB shares can control access to who can mount and access a share, as well as control access levels to users and groups in an Active Directory domain. The first level of permissions to be evaluated is the share access control list (ACL). ++SMB share permissions are more basic than file permissions: they only apply read, change, or full control. Share permissions can be overridden by file permissions and file permissions can be overridden by share permissions; the most restrictive permission is the one abided by. For instance, if the group "Everyone" is given full control on the share (the default behavior), and specific users have read-only access to a folder via a file-level ACL, then read access is applied to those users. Any other users not listed explicitly in the ACL have full control. ++Conversely, if the share permission is set to "Read" for a specific user, but the file-level permission is set to full control for that user, "Read" access is enforced. ++In dual-protocol NAS environments, SMB share ACLs only apply to SMB users. NFS clients leverage export policies and rules for share access rules. As such, controlling permissions at the file and folder level is preferred over share-level ACLs, especially for dual-protocol NAS volumes. ++To learn how to configure ACLs, see [Manage SMB share ACLs in Azure NetApp Files](manage-smb-share-access-control-lists.md). ++## Next steps ++* [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md) +* [Understand NAS](network-attached-storage-concept.md) +* [Understand NAS permissions](network-attached-storage-permissions.md) +* [Manage SMB share ACLs in Azure NetApp Files](manage-smb-share-access-control-lists.md) |
azure-netapp-files | Nfs Access Control Lists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md | + + Title: Understand NFSv4.x access control lists in Azure NetApp Files +description: Learn about using NFSv4.x access control lists in Azure NetApp Files. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 11/13/2023++++# Understand NFSv4.x access control lists in Azure NetApp Files ++The NFSv4.x protocol can provide access control in the form of [access control lists (ACLs)](/windows/win32/secauthz/access-control-lists), which are conceptually similar to ACLs used in [SMB via Windows NTFS permissions](network-attached-file-permissions-smb.md). An NFSv4.x ACL consists of individual [Access Control Entries (ACEs)](/windows/win32/secauthz/access-control-entries), each of which provides an access control directive to the server. +++Each NFSv4.x ACL is created with the format of `type:flags:principal:permissions`. ++* **Type**: the type of ACL being defined. Valid choices include Access (A), Deny (D), Audit (U), and Alarm (L). Azure NetApp Files supports Access, Deny, and Audit ACL types, but Audit ACLs, while they can be set, don't currently produce audit logs. +* **Flags**: adds extra context for an ACL. There are three kinds of ACE flags: group, inheritance, and administrative. For more information on flags, see [NFSv4.x ACE flags](#nfsv4x-ace-flags). +* **Principal**: defines the user or group that is being assigned the ACL. A principal on an NFSv4.x ACL uses the format of name@ID-DOMAIN-STRING.COM. For more detailed information on principals, see [NFSv4.x user and group principals](#nfsv4x-user-and-group-principals). +* **Permissions**: where the access level for the principal is defined. Each permission is designated a single letter (for instance, read gets "r", write gets "w", and so on). Full access would incorporate each available permission letter. For more information, see [NFSv4.x permissions](#nfsv4x-permissions). ++`A:g:group1@contoso.com:rwatTnNcCy` is an example of a valid ACL, following the `type:flags:principal:permissions` format. The example ACL grants full access to the group `group1` in the contoso.com ID domain. ++## NFSv4.x ACE flags ++An ACE flag helps provide more information about an ACE in an ACL. For instance, if a group ACE is added to an ACL, a group flag needs to be used to designate that the principal is a group and not a user. It's possible in Linux environments to have a user and a group with identical names, so the flag ensures the NFS server knows which type of principal the ACE applies to and can honor it correctly. ++Other flags can be used to control ACEs, such as inheritance and administrative flags. ++### Access and deny flags ++Access (A) and deny (D) flags are used to control security ACE types. An access ACE controls the level of access permissions on a file or folder for a principal. A deny ACE explicitly prohibits a principal from accessing a file or folder, even if an access ACE is set that would allow that principal to access the object. Deny ACEs always overrule access ACEs. In general, avoid using deny ACEs, as NFSv4.x ACLs follow a "default deny" model, meaning if an ACL isn't added, then deny is implicit. Deny ACEs can create unnecessary complications in ACL management. ++### Inheritance flags ++Inheritance flags control how ACLs behave on files created below a parent directory with the inheritance flag set.
When an inheritance flag is set, files and/or directories inherit the ACLs from the parent folder. Inheritance flags can only be applied to directories, so when a subdirectory is created, it inherits the flag. Files created below a parent directory with an inheritance flag inherit ACLs, but not the inheritance flags. ++The following table describes available inheritance flags and their behaviors. ++| Inheritance flag | Behavior | +| - | | +| d | - Directories below the parent directory inherit the ACL <br> - Inheritance flag is also inherited | +| f | - Files below the parent directory inherit the ACL <br> - Files don't set inheritance flag | +| i | Inherit-only; ACL doesn't apply to the current directory but must apply inheritance to objects below the directory | +| n | - No propagation of inheritance <br> After the ACL is inherited, the inherit flags are cleared on the objects below the parent | ++### NFSv4.x ACL examples ++In the following example, there are three different ACEs with distinct inheritance flags: +* directory inherit only (di) +* file inherit only (fi) +* both file and directory inherit (fdi) ++```bash +# nfs4_getfacl acl-dir ++# file: acl-dir/ +A:di:user1@CONTOSO.COM:rwaDxtTnNcCy +A:fdi:user2@CONTOSO.COM:rwaDxtTnNcCy +A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy +``` ++`User1` has a directory inherit ACL only. On a subdirectory created below the parent, the ACL is inherited, but on a file below the parent, it isn't. ++```bash +# nfs4_getfacl acl-dir/inherit-dir ++# file: acl-dir/inherit-dir +A:d:user1@CONTOSO.COM:rwaDxtTnNcCy +A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy +A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy ++# nfs4_getfacl acl-dir/inherit-file ++# file: acl-dir/inherit-file + << ACL missing +A::user2@CONTOSO.COM:rwaxtTnNcCy +A::user3@CONTOSO.COM:rwaxtTnNcCy +A::OWNER@:rwatTnNcCy +A:g:GROUP@:rtncy +A::EVERYONE@:rtncy +``` ++`User2` has a file and directory inherit flag. As a result, both files and directories below a directory with that ACE entry inherit the ACL, but files won't inherit the flag. ++```bash +# nfs4_getfacl acl-dir/inherit-dir ++# file: acl-dir/inherit-dir +A:d:user1@CONTOSO.COM:rwaDxtTnNcCy +A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy +A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy ++# nfs4_getfacl acl-dir/inherit-file ++# file: acl-dir/inherit-file +A::user2@CONTOSO.COM:rwaxtTnNcCy << no flag +A::user3@CONTOSO.COM:rwaxtTnNcCy +A::OWNER@:rwatTnNcCy +A:g:GROUP@:rtncy +A::EVERYONE@:rtncy +``` ++`User3` only has a file inherit flag. As a result, only files below the directory with that ACE entry inherit the ACL, but they don't inherit the flag since it can only be applied to directory ACEs. ++```bash +# nfs4_getfacl acl-dir/inherit-dir ++# file: acl-dir/inherit-dir +A:d:user1@CONTOSO.COM:rwaDxtTnNcCy +A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy +A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy ++# nfs4_getfacl acl-dir/inherit-file ++# file: acl-dir/inherit-file +A::user2@CONTOSO.COM:rwaxtTnNcCy +A::user3@CONTOSO.COM:rwaxtTnNcCy << no flag +A::OWNER@:rwatTnNcCy +A:g:GROUP@:rtncy +A::EVERYONE@:rtncy +``` ++When a "no-propagate" (n) flag is set on an ACL, the flags clear on subsequent directory creations below the parent. In the following example, `user2` has the `n` flag set.
As a result, the subdirectory clears the inherit flags for that principal and objects created below that subdirectory donΓÇÖt inherit the ACE from `user2`. ++```bash +# nfs4_getfacl /mnt/acl-dir ++# file: /mnt/acl-dir +A:di:user1@CONTOSO.COM:rwaDxtTnNcCy +A:fdn:user2@CONTOSO.COM:rwaDxtTnNcCy +A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy ++# nfs4_getfacl inherit-dir/ ++# file: inherit-dir/ +A:d:user1@CONTOSO.COM:rwaDxtTnNcCy +A::user2@CONTOSO.COM:rwaDxtTnNcCy << flag cleared +A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy ++# mkdir subdir +# nfs4_getfacl subdir ++# file: subdir +A:d:user1@CONTOSO.COM:rwaDxtTnNcCy +<< ACL not inherited +A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rxtncy +A::EVERYONE@:rxtncy +``` ++Inherit flags are a way to more easily manage your NFSv4.x ACLs, sparing you from explicitly setting an ACL each time you need one. ++### Administrative flags ++Administrative flags in NFSv4.x ACLs are special flags that are used only with Audit and Alarm ACL types. These flags define either success or failure access attempts for actions to be performed. For instance, if it's desired to audit failed access attempts to a specific file, then an administrative flag of ΓÇ£FΓÇ¥ can be used to control that behavior. ++This Audit ACL is an example of that, where `user1` is audited for failed access attempts for any permission level: `U:F:user1@contoso.com:rwatTnNcCy`. ++Azure NetApp Files only supports setting administrative flags for Audit ACEs. File access logging isn't currently supported. Alarm ACEs aren't supported in Azure NetApp Files. ++## NFSv4.x user and group principals ++With NFSv4.x ACLs, user and group principals define the specific objects that an ACE should apply to. Principals generally follow a format of name@ID-DOMAIN-STRING.COM. The ΓÇ£nameΓÇ¥ portion of a principal can be a user or group, but that user or group must be resolvable in Azure NetApp Files via the LDAP server connection when specifying the NFSv4.x ID domain. If the name@domain isn't resolvable by Azure NetApp Files, then the ACL operation fails with an ΓÇ£invalid argumentΓÇ¥ error. ++```bash +# nfs4_setfacl -a A::noexist@CONTOSO.COM:rwaxtTnNcCy inherit-file +Failed setxattr operation: Invalid argument +``` ++You can check within Azure NetApp Files if a name can be resolved using the LDAP group ID list. Navigate to **Support + Troubleshooting** then **LDAP Group ID list**. ++### Local user and group access via NFSv4.x ACLs ++Local users and groups can also be used on an NFSv4.x ACL if only the numeric ID is specified in the ACL. User names or numeric IDs with a domain ID specified fail. ++For instance: ++```bash +# nfs4_setfacl -a A:fdg:3003:rwaxtTnNcCy NFSACL +# nfs4_getfacl NFSACL/ +A:fdg:3003:rwaxtTnNcCy +A::OWNER@:rwaDxtTnNcCy +A:g:GROUP@:rwaDxtTnNcy +A::EVERYONE@:rwaDxtTnNcy ++# nfs4_setfacl -a A:fdg:3003@CONTOSO.COM:rwaxtTnNcCy NFSACL +Failed setxattr operation: Invalid argument ++# nfs4_setfacl -a A:fdg:users:rwaxtTnNcCy NFSACL +Failed setxattr operation: Invalid argument +``` ++When a local user or group ACL is set, any user or group that corresponds to the numeric ID on the ACL receives access to the object. For local group ACLs, a user passes its group memberships to Azure NetApp Files. If the numeric ID of the group with access to the file via the userΓÇÖs request is shown to the Azure NetApp Files NFS server, then access is allowed as per the ACL. 
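To illustrate the group-membership check described above, here is a hypothetical sketch (user, group, and mount names are illustrative); the ACE grants access purely by numeric GID, so any client-side user whose group list contains that GID qualifies:

```bash
# ACE set earlier on the export: A:fdg:3003:rwaxtTnNcCy
# user1 carries GID 3003 from the client, so the group ACE applies
id user1
# uid=1001(user1) gid=3003(group3003) groups=3003(group3003)

su user1 -c 'touch /mnt/NFSACL/newfile && ls -ln /mnt/NFSACL/newfile'
# -rw-r--r-- 1 1001 3003 0 Nov 16 10:05 /mnt/NFSACL/newfile
```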
++The credentials passed from client to server can be seen via a packet capture as seen below. +++**Caveats:** ++* Using local users and groups for ACLs means that every client accessing the files/folders needs to have matching user and group IDs. +* When using a numeric ID for an ACL, Azure NetApp Files implicitly trusts that the incoming request is valid and that the user requesting access is who they say they are and is a member of the groups they claim to be a member of. A user or group numeric ID can be spoofed if a bad actor knows the numeric ID and can access the network using a client with the ability to create users and groups locally. +* If a user is a member of more than 16 groups, then any group after the sixteenth group (in alphanumeric order) is denied access to the file or folder, unless LDAP and extended group support is used. +* LDAP and full name@domain name strings are highly recommended when using NFSv4.x ACLs for better manageability and security. A centrally managed user and group repository is easier to maintain and harder to spoof, thus making unwanted user access less likely. ++### NFSv4.x ID domain ++The ID domain is an important component of the principal, where an ID domain must match on both client and within Azure NetApp Files for user and group names (specifically, root) to show up properly on file/folder ownerships. ++Azure NetApp Files defaults the NFSv4.x ID domain to the DNS domain settings for the volume. NFS clients also default to the DNS domain for the NFSv4.x ID domain. If the client's DNS domain is different from the Azure NetApp Files DNS domain, then a mismatch occurs. When listing file permissions with commands such as `ls`, users/groups show up as "nobody". ++When a domain mismatch occurs between the NFS client and Azure NetApp Files, check the client logs for errors similar to: + +```bash +August 19 13:14:29 centos7 nfsidmap[17481]: nss_getpwnam: name 'root@microsoft.com' does not map into domain 'CONTOSO.COM' +``` ++The NFS client's ID domain can be overridden using the `Domain` setting in the /etc/idmapd.conf file. For example: `Domain = CONTOSO.COM`. ++Azure NetApp Files also allows you to [change the NFSv4.1 ID domain](azure-netapp-files-configure-nfsv41-domain.md). For additional details, see [How-to: NFSv4.1 ID Domain Configuration for Azure NetApp Files](https://www.youtube.com/watch?v=UfaJTYWSVAY). ++## NFSv4.x permissions ++NFSv4.x permissions are the way to control what level of access a specific user or group principal has on a file or folder. Permissions in NFSv3 only allow read/write/execute (rwx) levels of access definition, but NFSv4.x provides a slew of other granular access controls as an improvement over NFSv3 mode bits. ++There are 13 permissions that can be set for users, and 14 permissions that can be set for groups. ++| Permission letter | Permission granted | +| - | - | +|r | Read data/list files and folders | +|w | Write data/create files and folders | +|a | Append data/create subdirectories | +|x | Execute files/traverse directories | +|d | Delete files/directories | +|D | Delete subdirectories (directories only) | +|t | Read attributes (GETATTR) | +|T | Write attributes (SETATTR/chmod) | +|n | Read named attributes | +|N | Write named attributes | +|c | Read ACLs | +|C | Write ACLs | +|o | Write owner (chown) | +|y | Synchronous I/O | ++When access permissions are set, a user or group principal adheres to those assigned rights.
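As a hedged example of composing a permission string from the table above (the path and principal are illustrative), the following grants read/write-style access while withholding delete (d) and ACL modification (C):

```bash
# r/w/a: read, write, append; t/T: read/write attributes; n/N: named attributes; c: read ACLs; y: sync I/O
nfs4_setfacl -a A::user1@CONTOSO.COM:rwatTnNcy /mnt/acl-dir/report.txt

# Confirm the ACE was added
nfs4_getfacl /mnt/acl-dir/report.txt
```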
++### NFSv4.x permission examples ++The following examples show how different permissions work with different configuration scenarios. ++**User with read access (r only)** ++With read-only access, a user can read attributes and data, but any write access (data, attributes, owner) is denied. ++```bash +A::user1@CONTOSO.COM:r ++sh-4.2$ ls -la +total 12 +drwxr-xr-x 3 root root 4096 Jul 12 12:41 . +drwxr-xr-x 3 root root 4096 Jul 12 12:09 .. +-rw-r--r-- 1 root root 0 Jul 12 12:41 file +drwxr-xr-x 2 root root 4096 Jul 12 12:31 subdir +sh-4.2$ touch user1-file +touch: can't touch 'user1-file': Permission denied +sh-4.2$ chown user1 file +chown: changing ownership of 'file': Operation not permitted +sh-4.2$ nfs4_setfacl -e /mnt/acl-dir/inherit-dir +Failed setxattr operation: Permission denied +sh-4.2$ rm file +rm: remove write-protected regular empty file 'file'? y +rm: can't remove 'file': Permission denied +sh-4.2$ cat file +Test text +``` ++**User with read access (r) and write attributes (T)** ++In this example, permissions on the file can be changed due to the write attributes (T) permission, but no files can be created since only read access is allowed. This configuration illustrates the kind of granular controls NFSv4.x ACLs can provide. ++```bash +A::user1@CONTOSO.COM:rT ++sh-4.2$ touch user1-file +touch: can't touch 'user1-file': Permission denied +sh-4.2$ ls -la +total 60 +drwxr-xr-x 3 root root 4096 Jul 12 16:23 . +drwxr-xr-x 19 root root 49152 Jul 11 09:56 .. +-rw-r--r-- 1 root root 10 Jul 12 16:22 file +drwxr-xr-x 3 root root 4096 Jul 12 12:41 inherit-dir +-rw-r--r-- 1 user1 group1 0 Jul 12 16:23 user1-file +sh-4.2$ chmod 777 user1-file +sh-4.2$ ls -la +total 60 +drwxr-xr-x 3 root root 4096 Jul 12 16:41 . +drwxr-xr-x 19 root root 49152 Jul 11 09:56 .. +drwxr-xr-x 3 root root 4096 Jul 12 12:41 inherit-dir +-rwxrwxrwx 1 user1 group1 0 Jul 12 16:23 user1-file +sh-4.2$ rm user1-file +rm: can't remove 'user1-file': Permission denied +``` ++### Translating mode bits into NFSv4.x ACL permissions ++When a chmod is run on an object with NFSv4.x ACLs assigned, a series of system ACLs are updated with new permissions. For instance, if the permissions are set to 755, then the system ACLs get updated. The following table shows what each numeric value in a mode bit translates to in NFSv4 ACL permissions. ++See [NFSv4.x permissions](#nfsv4x-permissions) for a table outlining all the permissions. ++| Mode bit numeric | Corresponding NFSv4.x permissions | +| -- | -- | +| 1 - execute (x) | Execute, read attributes, read ACLs, sync I/O (xtcy) | +| 2 - write (w) | Write, append data, read attributes, write attributes, write named attributes, read ACLs, sync I/O (watTNcy) | +| 3 - write/execute (wx) | Write, append data, execute, read attributes, write attributes, write named attributes, read ACLs, sync I/O (waxtTNcy) | +| 4 - read (r) | Read, read attributes, read named attributes, read ACLs, sync I/O (rtncy) | +| 5 - read/execute (rx) | Read, execute, read attributes, read named attributes, read ACLs, sync I/O (rxtncy) | +| 6 - read/write (rw) | Read, write, append data, read attributes, write attributes, read named attributes, write named attributes, read ACLs, sync I/O (rwatTnNcy) | +| 7 - read/write/execute (rwx) | Full control/all permissions | ++## How NFSv4.x ACLs work with Azure NetApp Files ++Azure NetApp Files supports NFSv4.x ACLs natively when a volume has NFSv4.1 enabled for access.
There isn't anything to enable on the volume for ACL support, but for NFSv4.1 ACLs to work best, an LDAP server with UNIX users and groups is needed to ensure that Azure NetApp Files is able to resolve the principals set on the ACLs securely. Local users can be used with NFSv4.x ACLs, but they don't provide the same level of security as ACLs used with an LDAP server. ++There are considerations to keep in mind with ACL functionality in Azure NetApp Files. ++### ACL inheritance ++In Azure NetApp Files, ACL inheritance flags can be used to simplify ACL management with NFSv4.x ACLs. When an inheritance flag is set, ACLs on a parent directory can propagate down to subdirectories and files without further interaction. Azure NetApp Files implements standard ACL inherit behaviors as per [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530). ++### Deny ACEs ++Deny ACEs in Azure NetApp Files are used to explicitly restrict a user or group from accessing a file or folder. A subset of permissions can be defined to provide granular controls over the deny ACE. These operate in the standard methods mentioned in [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530). ++### ACL preservation ++When a chmod is performed on a file or folder in Azure NetApp Files, all existing ACEs are preserved on the ACL other than the system ACEs (OWNER@, GROUP@, EVERYONE@). Those ACE permissions are modified as defined by the numeric mode bits defined by the chmod command. Only ACEs that are manually modified or removed via the `nfs4_setfacl` command can be changed. ++### NFSv4.x ACL behaviors in dual-protocol environments ++Dual protocol refers to the use of both SMB and NFS on the same Azure NetApp Files volume. Dual-protocol access controls are determined by which security style the volume is using, but username mapping ensures that Windows users and UNIX users that successfully map to one another have the same access permissions to data. ++When NFSv4.x ACLs are in use on UNIX security style volumes, the following behaviors can be observed when using dual-protocol volumes and accessing data from SMB clients. ++* Windows usernames need to map properly to UNIX usernames for proper access control resolution. +* In UNIX security style volumes (where NFSv4.x ACLs would be applied), if no valid UNIX user exists in the LDAP server for a Windows user to map to, then a default UNIX user called `pcuser` (with uid numeric 65534) is used for mapping. +* Files written with Windows users with no valid UNIX user mapping display as owned by numeric ID 65534, which corresponds to ΓÇ£nfsnobodyΓÇ¥ or ΓÇ£nobodyΓÇ¥ usernames in Linux clients from NFS mounts. This is different from the numeric ID 99 which is typically seen with NFSv4.x ID domain issues. To verify the numeric ID in use, use the `ls -lan` command. +* Files with incorrect owners don't provide expected results from UNIX mode bits or from NFSv4.x ACLs. +* NFSv4.x ACLs are managed from NFS clients. SMB clients can neither view nor manage NFSv4.x ACLs. ++### Umask impact with NFSv4.x ACLs ++[NFSv4 ACLs provide the ability](http://linux.die.net/man/5/nfs4_acl) to offer ACL inheritance. ACL inheritance means that files or folders created beneath objects with NFSv4 ACLs set can inherit the ACLs based on the configuration of the [ACL inheritance flag](http://linux.die.net/man/5/nfs4_acl). ++Umask is used to control the permission level at which files and folders are created in a directory. 
By default, Azure NetApp Files allows umask to override inherited ACLs, which is expected behavior as per [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530). ++For more information, see [umask](network-attached-file-permissions-nfs.md#umask). ++### Chmod/chown behavior with NFSv4.x ACLs ++In Azure NetApp Files, you can use change ownership (chown) and change mode bit (chmod) commands to manage file and directory permissions on NFSv3 and NFSv4.x. ++When using NFSv4.x ACLs, the more granular controls applied to files and folder lessens the need for chmod commands. Chown still has a place, as NFSv4.x ACLs don't assign ownership. ++When chmod is run in Azure NetApp Files on files and folders with NFSv4.x ACLs applied, mode bits are changed on the object. In addition, a set of system ACEs are modified to reflect those mode bits. If the system ACEs are removed, then mode bits are cleared. Examples and a more complete description can be found in the section on system ACEs below. ++When chown is run in Azure NetApp Files, the assigned owner can be modified. File ownership isn't as critical when using NFSv4.x ACLs as when using mode bits, as ACLs can be used to control permissions in ways that basic owner/group/everyone concepts couldn't. Chown in Azure NetApp Files can only be run as root (either as root or by using sudo), since export controls are configured to only allow root to make ownership changes. Since this is controlled by a default export policy rule in Azure NetApp Files, NFSv4.x ACL entries that allow ownership modifications don't apply. ++```bash +# su user1 +# chown user1 testdir +chown: changing ownership of ΓÇÿtestdirΓÇÖ: Operation not permitted +# sudo chown user1 testdir +# ls -la | grep testdir +-rw-r--r-- 1 user1 root 0 Jul 12 16:23 testdir +``` ++The export policy rule on the volume can be modified to change this behavior. In the **Export policy** menu for the volume, modify **Chown mode** to "unrestricted." +++Once modified, ownership can be changed by users other than root if they have appropriate access rights. This requires the ΓÇ£Take OwnershipΓÇ¥ NFSv4.x ACL permission (designated by the letter ΓÇ£oΓÇ¥). Ownership can also be changed if the user changing ownership currently owns the file or folder. ++```bash +A::user1@contoso.com:rwatTnNcCy << no ownership flag (o) ++user1@ubuntu:/mnt/testdir$ chown user1 newfile3 +chown: changing ownership of 'newfile3': Permission denied ++A::user1@contoso.com:rwatTnNcCoy << with ownership flag (o) ++user1@ubuntu:/mnt/testdir$ chown user1 newfile3 +user1@ubuntu:/mnt/testdir$ ls -la +total 8 +drwxrwxrwx 2 user2 root 4096 Jul 14 16:31 . +drwxrwxrwx 5 root root 4096 Jul 13 13:46 .. +-rw-r--r-- 1 user1 root 0 Jul 14 15:45 newfile +-rw-r--r-- 1 root root 0 Jul 14 15:52 newfile2 +-rw-r--r-- 1 user1 4294967294 0 Jul 14 16:31 newfile3 +``` ++### System ACEs ++On every ACL, there are a series of system ACEs: OWNER@, GROUP@, EVERYONE@. For example: ++```bash +A::OWNER@:rwaxtTnNcCy +A:g:GROUP@:rwaxtTnNcy +A::EVERYONE@:rwaxtTnNcy +``` ++These ACEs correspond with the classic mode bits permissions you would see in NFSv3 and are directly associated with those permissions. When a chmod is run on an object, these system ACLs change to reflect those permissions. 
++```bash +# nfs4_getfacl user1-file ++# file: user1-file +A::user1@CONTOSO.COM:rT +A::OWNER@:rwaxtTnNcCy +A:g:GROUP@:rwaxtTnNcy +A::EVERYONE@:rwaxtTnNcy ++# chmod 755 user1-file ++# nfs4_getfacl user1-file ++# file: user1-file +A::OWNER@:rwaxtTnNcCy +A:g:GROUP@:rxtncy +``` ++If those system ACEs are removed, then the permission view changes such that the normal mode bits (rwx) show up as dashes. ++```bash +# nfs4_setfacl -x A::OWNER@:rwaxtTnNcCy user1-file +# nfs4_setfacl -x A:g:GROUP@:rxtncy user1-file +# nfs4_setfacl -x A::EVERYONE@:rxtncy user1-file +# ls -la | grep user1-file +- 1 user1 group1 0 Jul 12 16:23 user1-file +``` ++Removing system ACEs is a way to further secure files and folders, as only the user and group principals on the ACL (and root) are able to access the object. Removing system ACEs can break applications that rely on mode bit views for functionality. ++### Root user behavior with NFSv4.x ACLs ++Root access with NFSv4.x ACLs can't be limited unless [root is squashed](network-attached-storage-permissions.md#root-squashing). Root squashing is where an export policy rule is configured where root is mapped to an anonymous user to limit access. Root access can be configured from a volume's **Export policy** menu by changing the policy rule of **Root access** to off. ++To configure root squashing, navigate to the **Export policy** menu on the volume then change ΓÇ£Root accessΓÇ¥ to ΓÇ£offΓÇ¥ for the policy rule. +++The effect of disabling root access root squashes to anonymous user `nfsnobody:65534`. Root access is then unable to change ownership. ++```bash +root@ubuntu:/mnt/testdir# touch newfile3 +root@ubuntu:/mnt/testdir# ls -la +total 8 +drwxrwxrwx 2 user2 root 4096 Jul 14 16:31 . +drwxrwxrwx 5 root root 4096 Jul 13 13:46 .. +-rw-r--r-- 1 user1 root 0 Jul 14 15:45 newfile +-rw-r--r-- 1 root root 0 Jul 14 15:52 newfile2 +-rw-r--r-- 1 nobody 4294967294 0 Jul 14 16:31 newfile3 +root@ubuntu:/mnt/testdir# ls -lan +total 8 +drwxrwxrwx 2 1002 0 4096 Jul 14 16:31 . +drwxrwxrwx 5 0 0 4096 Jul 13 13:46 .. +-rw-r--r-- 1 1001 0 0 Jul 14 15:45 newfile +-rw-r--r-- 1 0 0 0 Jul 14 15:52 newfile2 +-rw-r--r-- 1 65534 4294967294 0 Jul 14 16:31 newfile3 +root@ubuntu:/mnt/testdir# chown root newfile3 +chown: changing ownership of 'newfile3': Operation not permitted +``` ++Alternatively, in dual-protocol environments, NTFS ACLs can be used to granularly limit root access. +++## Next steps ++* [Configure NFS clients](configure-nfs-clients.md) +* [Configure access control lists on NFSv4.1 volumes](configure-access-control-lists.md) |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | - ## November 2023 +* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) are now generally available (GA). ++ User and group quotas enable you to stay in control and define how much storage capacity individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas. ++ This feature is Generally Available in Azure commercial regions and US Gov regions where Azure NetApp Files is available. + * [SMB Continuous Availability (CA)](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) shares now support MSIX app attach for Azure Virtual Desktop. In addition to Citrix App Layering, FSLogix user profiles including FSLogix ODFC containers, and Microsoft SQL Server, Azure NetApp Files now supports [MSIX app attach](../virtual-desktop/create-netapp-files.md) with SMB Continuous Availability shares to enhance resiliency during storage service maintenance operations. Continuous Availability enables SMB transparent failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience. |
azure-resource-manager | Async Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/async-operations.md | If `Azure-AsyncOperation` isn't one of the header values, then look for: > [!NOTE] > Your REST client must accept a minimum URL size of 4 KB for `Azure-AsyncOperation` and `Location`. +> [!NOTE] +> When the `Retry-after` header is not returned, implement your own retry logic by following the Azure guidelines in [this](https://learn.microsoft.com/azure/architecture/best-practices/retry-service-specific#general-rest-and-retry-guidelines) document. + ## Azure-AsyncOperation request and response If you have a URL from the `Azure-AsyncOperation` header value, send a GET request to that URL. Use the value from `Retry-After` to schedule how often to check the status. You'll get a response object that indicates the status of the operation. A different response is returned when checking the status of the operation with the `Location` URL. For more information about the response from a location URL, see [Create storage account (202 with Location and Retry-After)](#create-storage-account-202-with-location-and-retry-after). |
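As a hedged sketch of the `Azure-AsyncOperation` polling guidance above, assuming `$ASYNC_URL` comes from the original response header and `$TOKEN` from your own auth flow (in-progress status values can vary by operation):

```bash
STATUS="InProgress"
while [ "$STATUS" = "InProgress" ] || [ "$STATUS" = "Running" ]; do
  # Capture response headers so Retry-After can be honored when it is returned
  BODY=$(curl -s -D headers.txt -H "Authorization: Bearer $TOKEN" "$ASYNC_URL")
  STATUS=$(echo "$BODY" | grep -o '"status": *"[^"]*"' | head -1 | cut -d'"' -f4)
  # Fall back to a fixed delay when Retry-After is absent (your own retry logic)
  WAIT=$(grep -i '^Retry-After:' headers.txt | awk '{print $2}' | tr -d '\r')
  sleep "${WAIT:-30}"
done
echo "Final status: $STATUS"
```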
azure-vmware | Configure Azure Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md | Title: Configure Azure Elastic SAN (Preview) -description: Learn how to use Elastic SAN with Azure VMware Solution + Title: Use Azure VMware Solution with Azure Elastic SAN Preview +description: Learn how to use Elastic SAN Preview with Azure VMware Solution Previously updated : 11/07/2023 Last updated : 11/16/2023 -# Configure Azure Elastic SAN (Preview) +# Use Azure VMware Solution with Azure Elastic SAN Preview -In this article, learn how to configure Azure Elastic SAN or delete an Elastic SAN-based datastore. +This article explains how to use Azure Elastic SAN Preview as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters. -## What is Azure Elastic SAN --[Azure Elastic storage area network](https://review.learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction?branch=main) (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN. Azure Elastic SAN also offers built-in cloud capabilities, like high availability. --[Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/introduction) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters. +Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN? Preview](../storage/elastic-san/elastic-san-introduction.md). ## Prerequisites The following prerequisites are required to continue. In this section, you create a virtual network for your Elastic SAN. Then you create the Elastic SAN that includes creating at least one volume group and one volume that becomes your VMFS datastore. Next, you set up a Private Endpoint for your Elastic SAN that allows your SDDC to connect to the Elastic SAN volume. Then you're ready to add an Elastic SAN volume as a datastore in your SDDC. 1. Use one of the following instruction options to set up a dedicated virtual network for your Elastic SAN:- - [Azure portal](https://learn.microsoft.com/azure/virtual-network/quick-create-portal) - - [PowerShell](https://learn.microsoft.com/azure/virtual-network/quick-create-powershell) - - [Azure CLI](https://learn.microsoft.com/azure/virtual-network/quick-create-cli) -2. 
Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group: + - [Azure portal](../virtual-network/quick-create-portal.md) + - [Azure PowerShell module](../virtual-network/quick-create-powershell.md) + - [Azure CLI](../virtual-network/quick-create-cli.md) +1. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group: > [!IMPORTANT]- > Make sure to create this Elastic SAN in the same region and availability zone as your SDDC for best performance. + > Create your Elastic SAN in the same region and availability zone as your SDDC for best performance. - [Azure portal](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli)-3. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN: +1. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN: - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli) After you provide an External storage address block, you can connect to an Elast ## Connect Elastic SAN 1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **+ Connect Elastic SAN**.-2. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**. -3. From section, "Rename datastore as per VMware requirements", under **Volume name** > **Data store name**, give names to the Elastic SAN volumes. +1. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**. +1. From section, "Rename datastore as per VMware requirements", under **Volume name** > **Data store name**, give names to the Elastic SAN volumes. > [!NOTE] > For best performance, verify that your Elastic SAN volume and SDDC are in the same Region and Availability Zone. After you provide an External storage address block, you can connect to an Elast To delete the Elastic SAN-based datastore, use the following steps from the Azure portal. 1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **Storage list**.-2. Under **Virtual network**, select **Disconnect** to disconnect the datastore from the Cluster(s). -3. Optionally you can delete the volume you previously created in your Elastic SAN. +1. Under **Virtual network**, select **Disconnect** to disconnect the datastore from the Cluster(s). +1. Optionally you can delete the volume you previously created in your Elastic SAN. |
azure-web-pubsub | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/whats-new.md | -On this page, you can read about recent updates about Azure Web PubSub. As we make continuous improvements to the capabilities and developer experience of the service, we welcome any feedback and suggestions. Reach out to the service team at **awps@micrsoft.com** +On this page, you can read about recent updates about Azure Web PubSub. As we make continuous improvements to the capabilities and developer experience of the service, we welcome any feedback and suggestions. Reach out to the service team at **awps@microsoft.com** ## Q4 2023 |
backup | Backup Azure Database Postgresql Flex Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex-overview.md | To perform the backup operation: Once the configuration is complete: -1. The Backup recovery point invokes the backup based on the policy schedules on the ARM API of PostgresFlex server, writing data to a secure blob-container with a SAS for enhanced security. +1. The Backup service invokes the backup based on the policy schedules on the ARM API of PostgresFlex server, writing data to a secure blob-container with a SAS for enhanced security. 1. Backup runs independently preventing disruptions during long-running tasks. 1. The retention and recovery point lifecycles align with the backup policies for effective management. -1. During the restore, the Backup recovery point invokes restore on the ARM API of PostgresFlex server using the SAS for asynchronous, nondisruptive recovery. +1. During the restore, the Backup service invokes restore on the ARM API of PostgresFlex server using the SAS for asynchronous, nondisruptive recovery. :::image type="content" source="./media/backup-azure-database-postgresql-flex-overview/backup-process.png" alt-text="Diagram showing the backup process."::: |
backup | Backup Azure Database Postgresql Flex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex.md | To configure backup on the Azure PostgreSQL-flex databases using Azure Backup, f 1. Choose one of the Azure PostgreSQL-Flex servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server. :::image type="content" source="./media/backup-azure-database-postgresql-flex/select-resources.png" alt-text="Screenshot showing the select resources option."::: -1. After the selection, the validation starts. The backup readiness check ensures the vault has sufficient permissions for backup operations. Resolve any access issues by selecting **Assign missing roles** action button in the top action menu to grant permissions. - :::image type="content" source="./media/backup-azure-database-postgresql-flex/assign-missing-roles.png" alt-text="Screenshot showing the **Assign missing roles** option."::: -+1. After the selection, the validation starts. The backup readiness check ensures the vault has sufficient permissions for backup operations. Resolve any access issues by granting appropriate [permissions](/azure/backup/backup-azure-database-postgresql-flex-overview) to the vault MSI and re-triggering the validation. 1. Submit the configure backup operation and track the progress under **Backup instances**. |
backup | Backup Azure Sql Manage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md | F7c68818-039f-4a0f-8d73-e0747e68a813 Restore (Log) Completed master To change the policy underlying the SQL backup configuration, use the [az backup policy set](/cli/azure/backup/policy#az-backup-policy-set) command. The name parameter in this command refers to the backup item whose policy you want to change. Here, replace the policy of the SQL database *sqldatabase;mssqlserver;master* with a new policy *newSQLPolicy*. You can create new policies using the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) command. ```azurecli-interactive-az backup item set policy --resource-group SQLResourceGroup \ +az backup item set-policy --resource-group SQLResourceGroup \ --vault-name SQLVault \ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \ --policy-name newSQLPolicy \ |
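For teams working in PowerShell rather than the Azure CLI, a rough equivalent of the policy change is sketched below. It assumes the Az.RecoveryServices module and uses illustrative resource names, so treat it as a shape to adapt rather than the documented command.

```powershell
# Sketch: switch a protected SQL database to a different backup policy.
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "SQLResourceGroup" -Name "SQLVault"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "newSQLPolicy" -VaultId $vault.ID

# Find the protected item (the SQL database) and re-enable protection with the new policy.
$item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL `
            -VaultId $vault.ID | Where-Object { $_.Name -like "*master*" }

Enable-AzRecoveryServicesBackupProtection -Item $item -Policy $policy -VaultId $vault.ID
```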
confidential-computing | Confidential Containers On Aks Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-on-aks-preview.md | Title: Confidential containers on Azure Kubernetes Service -description: Learn about pod level isolation via confidential containers on Azure Kubernetes Service + Title: Confidential Containers (preview) on Azure Kubernetes Service +description: Learn about pod level isolation using Confidential Containers (preview) on Azure Kubernetes Service -# Confidential containers on Azure Kubernetes Service -With the growth in cloud-native application development, there's an increased need to protect the workloads running in cloud environments as well. Containerizing the workload forms a key component for this programming model, and then, protecting the container is paramount to running confidentially in the cloud. +# Confidential Containers (preview) on Azure Kubernetes Service ++With the growth in cloud-native application development, there's an increased need to protect the workloads running in cloud environments as well. Containerizing the workload forms a key component for this programming model, and then, protecting the container is paramount to running confidentially in the cloud. :::image type="content" source="media/confidential-containers/attack-vectors-conf-containers.png" alt-text="Diagram of various attack vectors that make your Kubernetes container vulnerable."::: +Confidential Containers on Azure Kubernetes Service (AKS) enable container level isolation in your Kubernetes workloads. It's an addition to Azure's suite of confidential computing products, and uses the AMD SEV-SNP memory encryption to protect your containers at runtime. -Confidential containers on Azure Kubernetes Service (AKS) enable container level isolation in your Kubernetes workloads. It's an addition to Azure suite of confidential computing products, and uses the AMD SEV-SNP memory encryption to protect your containers at runtime. -Confidential containers are attractive for deployment scenarios that involve sensitive data (for instance, personal data or any data with strong security needed for regulatory compliance). +Confidential Containers are attractive for deployment scenarios that involve sensitive data (for instance, personal data or any data with strong security needed for regulatory compliance). ## What makes a container confidential?-In alignment with the guidelines set by the [Confidential Computing Consortium](https://confidentialcomputing.io/), that Microsoft is a founding member of, confidential containers need to fulfill the following – -* Transparency: The confidential container environment where your sensitive application is shared, you can see and verify if it's safe. All components of the Trusted Computing Base (TCB) are to be open sourced. -* Auditability: Customers shall have the ability to verify and see what version of the CoCo environment package including Linux Guest OS and all the components are current. Microsoft signs to the guest OS and container runtime environment for verifications through attestation. It also releases a secure hash algorithm (SHA) of guest OS builds to build a string audibility and control story. -* Full attestation: Anything that is part of the TEE shall be fully measured by the CPU with ability to verify remotely. The hardware report from AMD SEV-SNP processor shall reflect container layers and container runtime configuration hash through the attestation claims. 
Application can fetch the hardware report locally including the report that reflects Guest OS image and container runtime. ++In alignment with the guidelines set by the [Confidential Computing Consortium](https://confidentialcomputing.io/), of which Microsoft is a founding member, Confidential Containers need to fulfill the following – ++* Transparency: The confidential container environment where your sensitive application is shared, you can see and verify if it's safe. All components of the Trusted Computing Base (TCB) are to be open sourced. +* Auditability: You have the ability to verify and see what version of the CoCo environment package including Linux Guest OS and all the components are current. Microsoft signs the guest OS and container runtime environment for verification through attestation. It also releases a secure hash algorithm (SHA) of guest OS builds to build a strong auditability and control story. +* Full attestation: Anything that is part of the TEE shall be fully measured by the CPU with ability to verify remotely. The hardware report from AMD SEV-SNP processor shall reflect container layers and container runtime configuration hash through the attestation claims. Application can fetch the hardware report locally including the report that reflects Guest OS image and container runtime. * Code integrity: Runtime enforcement is always available through customer defined policies for containers and container configuration, such as immutable policies and container signing. -* Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods. +* Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods. But with these features of confidentiality, the product maintains its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogeneous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost. -## What forms confidential containers on AKS? -Aligning with Microsoft's commitment to the open-source community, the underlying stack for confidential containers uses the [Kata CoCo](https://github.com/confidential-containers/confidential-containers) agent as the agent running in the node that hosts the pod running the confidential workload. With many TEE technologies requiring a boundary between the host and guest, [Kata Containers](https://katacontainers.io/) are the basis for the Kata CoCo initial work. Microsoft also contributed back to the Kata Coco community to power containers running inside a confidential utility VM. +## What forms Confidential Containers on AKS? -The Kata confidential container resides within the Azure Linux AKS Container Host. [Azure Linux](https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/announcing-preview-availability-of-the-mariner-aks-container/ba-p/3649154) and the Cloud Hypervisor VMM (Virtual Machine Monitor) is the end-user facing/user space software that is used for creating and managing the lifetime of virtual machines. 
+Aligning with Microsoft's commitment to the open-source community, the underlying stack for Confidential Containers uses the [Kata CoCo](https://github.com/confidential-containers/confidential-containers) agent as the agent running in the node that hosts the pod running the confidential workload. With many TEE technologies requiring a boundary between the host and guest, [Kata Containers](https://katacontainers.io/) are the basis for the Kata CoCo initial work. Microsoft also contributed back to the Kata Coco community to power containers running inside a confidential utility VM. ++The Kata confidential container resides within the Azure Linux AKS Container Host. [Azure Linux](../aks/use-azure-linux.md) and the Cloud Hypervisor VMM (Virtual Machine Monitor) are the end-user facing/user space software that is used for creating and managing the lifetime of virtual machines. ## Container level isolation in AKS-In default, AKS all workloads share the same kernel and the same cluster admin. With the preview of Pod Sandboxing on AKS, the isolation grew a notch higher with the ability to provide kernel isolation for workloads on the same AKS node. You can read more about the product [here](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/preview-support-for-kata-vm-isolated-containers-on-aks-for-pod/ba-p/3751557). Confidential containers are the next step of this isolation and it uses the memory encryption capabilities of the underlying AMD SEV-SNP virtual machine sizes. These virtual machines are the [DCa_cc](../../articles/virtual-machines/dcasccv5-dcadsccv5-series.md) and [ECa_cc](../../articles/virtual-machines/ecasccv5-ecadsccv5-series.md) sizes with the capability of surfacing the hardware's root of trust to the pods deployed on it. +By default, all AKS workloads share the same kernel and the same cluster admin. With the preview of Pod Sandboxing on AKS, the isolation grew a notch higher with the ability to provide kernel isolation for workloads on the same AKS node. You can read more about the feature [here](../aks/use-pod-sandboxing.md). Confidential Containers are the next step of this isolation, and they use the memory encryption capabilities of the underlying AMD SEV-SNP virtual machine sizes. These virtual machines are the [DCa_cc](../virtual-machines/dcasccv5-dcadsccv5-series.md) and [ECa_cc](../virtual-machines/ecasccv5-ecadsccv5-series.md) sizes with the capability of surfacing the hardware's root of trust to the pods deployed on them. ## Get started-To get started and learn more about supported scenarios, please refer to our AKS documentation [here](https://aka.ms/conf-containers-aks-documentation). - +To get started and learn more about supported scenarios, refer to our AKS documentation [here](../aks/confidential-containers-overview.md). ## Next step -> To learn more about this announcement, checkout our blog [here](https://aka.ms/coco-aks-preview). -> We also have a demo of a confidential container running an end-to-end encrypted messaging system on Kafka [here](https://aka.ms/Ignite2023-ConfContainers-AKS-Preview). +[Deploy a Confidential Container on AKS](../aks/deploy-confidential-containers-default-policy.md). |
confidential-computing | Skr Flow Confidential Vm Sev Snp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-vm-sev-snp.md | -The below article describes how to perform a Secure Key Release from Azure Key Value when your applications are running with an AMD SEV-SNP confidential. To learn more about Secure Key Release and Azure Confidential Computing, [go here.](./concept-skr-attestation.md). +The below article describes how to perform a Secure Key Release from Azure Key Vault when your applications are running with an AMD SEV-SNP based confidential virtual machine. To learn more about Secure Key Release and Azure Confidential Computing, [go here.](./concept-skr-attestation.md). SKR requires that an application performing SKR shall go through a remote guest attestation flow using Microsoft Azure Attestation (MAA) as described [here](guest-attestation-confidential-vms.md). SKR requires that an application performing SKR shall go through a remote guest To allow Azure Key Vault to release a key to an attested confidential virtual machine, there are certain steps that need to be followed: -1. Assign a managed identity to the confidential virtual machine. System-assigned managed identity or a user-assigned managed identity are allowed. -1. Set a Key Vault access policy to grant the managed identity the "release" key permission. A policy allows the confidential virtual machine to access the Key Vault and perform the release operation. If using Key Vault Managed HSM, assign "Managed HSM Crypto Service Release User" role membership. -1. Create a Key Vault key that is marked as exportable and has an associated release policy. Key release policy associates the key to an attested confidential virtual machine and that the key can only be used for the desired purpose. -1. To perform the release, send an HTTP request to the Key Vault from the confidential virtual machine. HTTP request must include the Confidential VMs attested platform report in the request body. The attested platform report is used to verify the trustworthiness of the state of the Trusted Execution Environment-enabled platform, such as the Confidential VM. The Microsoft Azure Attestation service can be used to create the attested platform report and include it in the request. +1. Assign a managed identity to the confidential virtual machine. System-assigned managed identity or a user-assigned managed identity are supported. +1. Set a Key Vault access policy to grant the managed identity the "release" key permission. A policy allows the confidential virtual machine to access the Key Vault and perform the release operation. If using Key Vault Managed HSM, assign the "Managed HSM Crypto Service Release User" role membership. +1. Create a Key Vault key that is marked as exportable and has an associated release policy. The key release policy associates the key to an attested confidential virtual machine and that the key can only be used for the desired purpose. +1. To perform the release, send an HTTP request to the Key Vault from the confidential virtual machine. The HTTP request must include the Confidential VMs attested platform report in the request body. The attested platform report is used to verify the trustworthiness of the state of the Trusted Execution Environment-enabled platform, such as the Confidential VM. The Microsoft Azure Attestation service can be used to create the attested platform report and include it in the request. 
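Steps 2 and 3 above can also be scripted. The sketch below is one possible shape using recent Az.KeyVault cmdlets; the `release` key permission and the `-Exportable`/`-ReleasePolicyPath` parameters are only present in newer module versions, and the vault, key, and file names are placeholders, so verify everything against your installed module before relying on it.

```powershell
# Sketch: grant the CVM's managed identity the "release" key permission, then
# create an exportable key bound to a release policy (the policy file path is a placeholder).
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId "<managed identity object ID of the confidential VM>" `
    -PermissionsToKeys get,release

Add-AzKeyVaultKey -VaultName "myKeyVault" -Name "mySkrKey" `
    -KeyType RSA -Size 3072 -Destination Software `
    -Exportable -ReleasePolicyPath ".\release-policy.json"
```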
![Diagram of the aforementioned operations, which we'll be performing.](media/skr-flow-confidential-vm-sev-snp-attestation/overview.png) To enable system-assigned managed identity on a CVM, your account needs the [Vir ## Add the access policy to Azure Key Vault -Once you turn on a system-assigned managed identity for your CVM, you have to provide it with access to the Azure Key Vault data plane where key objects are stored. To ensure that only our confidential virtual machine can execute the release operation, we'll only grant specific permission required for that. +Once you enable a system-assigned managed identity for your CVM, you have to provide it with access to the Azure Key Vault data plane where key objects are stored. To ensure that only our confidential virtual machine can execute the release operation, we'll only grant the specific permission required. > [!NOTE] > You can find the managed identity object ID in the virtual machine identity options, in the Azure portal. Alternatively you can retrieve it with [PowerShell](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md), [Azure CLI](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md), Bicep or ARM templates. A [open sourced](https://github.com/Azure/confidential-computing-cvm-guest-attes ### Guest Attestation result -The result from the Guest Attestation client simply is a base64 encoded string! This encoded string value is a signed JSON Web Token (__JWT__), with a header, body and signature. You can split the string by the `.` (dot) value and base64 decode the results. +The result from the Guest Attestation client simply is a base64 encoded string. This encoded string value is a signed JSON Web Token (__JWT__), with a header, body and signature. You can split the string by the `.` (dot) value and base64 decode the results. ```text eyJhbGciO... Here we have another header, though this one has a [X.509 certificate chain](htt } ``` -You can read from the "`x5c`" array in PowerShell if you wanted to, this can help you verify that this is a valid certificate. Below is an example: +You can read from the "`x5c`" array in PowerShell, this can help you verify that this is a valid certificate. Below is an example: ```powershell $certBase64 = "MIIIfDCCBmSgA..XQ==" |
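As a concrete illustration of the splitting and decoding described above, the following snippet takes the raw token returned by the Guest Attestation client, splits it on the dots, and base64url-decodes the header and body. `$attestationToken` is a placeholder for the string your application received; the signature segment is left untouched.

```powershell
# Decode the header and body of the attestation JWT (the signature is not decoded).
$attestationToken = "eyJhbGciO..."   # placeholder: output of the Guest Attestation client

$parts = $attestationToken.Split('.')

function ConvertFrom-Base64Url([string]$s) {
    # JWTs use base64url without padding; normalize before decoding.
    $s = $s.Replace('-', '+').Replace('_', '/')
    switch ($s.Length % 4) { 2 { $s += '==' } 3 { $s += '=' } }
    [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($s))
}

ConvertFrom-Base64Url $parts[0]   # header JSON
ConvertFrom-Base64Url $parts[1]   # body (claims) JSON
```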
container-apps | Start Serverless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/start-serverless-containers.md | -Serverless computing offers services that manage and maintain servers, which relive you of the burden of physically operating servers yourself. Azure Container Apps is a serverless platform that handles scaling, security, and infrastructure management for you - all while reducing costs. Once freed from server-related concerns, you're able to spend your time focusing on your application code. +Serverless computing offers services that manage and maintain servers, which relieve you of the burden of physically operating servers yourself. Azure Container Apps is a serverless platform that handles scaling, security, and infrastructure management for you - all while reducing costs. Once freed from server-related concerns, you're able to spend your time focusing on your application code. Container Apps make it easy to manage: |
cosmos-db | Troubleshoot Dotnet Sdk Request Timeout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-timeout.md | description: Learn how to diagnose and fix .NET SDK request timeout exceptions. Previously updated : 02/15/2023 Last updated : 11/16/2023 If you use an HTTP proxy, make sure it can support the number of connections con ### Create multiple client instances -Creating multiple client instances might lead to connection contention and timeout issues. +Creating multiple client instances might lead to connection contention and timeout issues. The [Diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) contain two relevant properties: ++```json +{ + "NumberOfClientsCreated":X, + "NumberOfActiveClients":Y, +} +``` ++`NumberOfClientsCreated` tracks the number of times a `CosmosClient` was created within the same AppDomain, and `NumberOfActiveClients` tracks the active clients (not disposed). The expectation is that if the singleton pattern is followed, `X` would match the number of accounts the application works with and that `X` is equal to `Y`. ++If `X` is greater than `Y`, it means the application is creating and disposing client instances. This can lead to [connection contention](#socket-or-port-availability-might-be-low) and/or [CPU contention](#high-cpu-utilization). #### Solution -Follow the [performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage), and use a single CosmosClient instance across an entire process. +Follow the [performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage), and use a single CosmosClient instance per account across an entire process. Avoid creating and disposing clients. ### Hot partition key |
cosmos-db | Tutorial Log Transformation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-log-transformation.md | In this tutorial, you learn how to: To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](../azure-monitor/logs/manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../azure-monitor/essentials/data-collection-rule-overview.md#permissions) in the workspace.+- [Permissions to create DCR objects](../azure-monitor/essentials/data-collection-rule-create-edit.md#permissions) in the workspace. - A table that already has some data. - The table can't be linked to the [workspace transformation DCR](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr). |
cost-management-billing | Reservation Exchange Policy Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md | Azure savings plan for compute was launched in October 2022 to provide you with You can continue to use instance size flexibility for VM sizes, but Microsoft is ending exchanges for regions and instance series for these Azure compute reservations. -The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. +The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed USD 50,000 in a 12-month rolling window for a billing profile or single enrollment. A compute reservation exchange for another compute reservation exchange is similar to, but not the same as a reservation [trade-in](../savings-plan/reservation-trade-in.md) for a savings plan. The difference is that you can always trade in your Azure reserved instances for compute for a savings plan. There's no time limit for trade-ins. |
data-factory | Connector Amazon Marketplace Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md | This Amazon Marketplace Web Service connector is supported for the following cap |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Amazon Rds For Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-oracle.md | This Amazon RDS for Oracle connector is supported for the following capabilities |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Amazon Rds For Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-sql-server.md | This Amazon RDS for SQL Server connector is supported for the following capabili |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Stored procedure activity](transform-data-using-stored-procedure.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Amazon Redshift | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md | This Amazon Redshift connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Amazon S3 Compatible Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md | This Amazon S3 Compatible Storage connector is supported for the following capab |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this Amazon S3 Compatible Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3. You can use this Amazon S3 Compatible Storage connector to copy data from any S3-compatible storage provider. Specify the corresponding service URL in the linked service configuration. |
data-factory | Connector Amazon Simple Storage Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md | This Amazon S3 connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this Amazon S3 connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). You can also choose to [preserve file metadata during copy](#preserve-metadata-during-copy). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3. |
data-factory | Connector Appfigures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-appfigures.md | This AppFigures connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Asana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md | This Asana connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Azure Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md | This Azure Blob Storage connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②|Γ£ô <small> Exclude storage account V1| |[Delete activity](delete-activity.md)|① ②|Γ£ô <small> Exclude storage account V1| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For the Copy activity, this Blob storage connector supports: |
data-factory | Connector Azure Cosmos Analytical Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-analytical-store.md | This Azure Cosmos DB for NoSQL connector is supported for the following capabili |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* ## Mapping data flow properties |
data-factory | Connector Azure Cosmos Db Mongodb Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md | This Azure Cosmos DB for MongoDB connector is supported for the following capabi || --| --| |[Copy activity](copy-activity-overview.md) (source/sink)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from Azure Cosmos DB for MongoDB to any supported sink data store, or copy data from any supported source data store to Azure Cosmos DB for MongoDB. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Azure Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md | This Azure Cosmos DB for NoSQL connector is supported for the following capabili |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |Γ£ô | |[Lookup activity](control-flow-lookup-activity.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For Copy activity, this Azure Cosmos DB for NoSQL connector supports: |
data-factory | Connector Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md | This Azure Data Explorer connector is supported for the following capabilities: |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① | |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from any supported source data store to Azure Data Explorer. You can also copy data from Azure Data Explorer to any supported sink data store. For a list of data stores that the copy activity supports as sources or sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Azure Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md | This Azure Data Lake Storage Gen2 connector is supported for the following capab |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②|Γ£ô | |[Delete activity](delete-activity.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For Copy activity, with this connector you can: |
data-factory | Connector Azure Data Lake Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md | This Azure Data Lake Storage Gen1 connector is supported for the following capab |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, with this connector you can: |
data-factory | Connector Azure Database For Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mariadb.md | This Azure Database for MariaDB connector is supported for the following capabil |[Copy activity](copy-activity-overview.md) (source/-)|① ②|Γ£ô | |[Lookup activity](control-flow-lookup-activity.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from Azure Database for MariaDB to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Azure Database For Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md | This Azure Database for MySQL connector is supported for the following capabilit |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |Γ£ô | |[Lookup activity](control-flow-lookup-activity.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* ## Getting started The below table lists the properties supported by Azure Database for MySQL sourc | Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName | | Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs | | Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel | +| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel | #### Azure Database for MySQL source script example |
data-factory | Connector Azure Database For Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md | This Azure Database for PostgreSQL connector is supported for the following capa |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |Γ£ô | |[Lookup activity](control-flow-lookup-activity.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* The three activities work on all Azure Database for PostgreSQL deployment options: The below table lists the properties supported by Azure Database for PostgreSQL | Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName | | Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs | | Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel | +| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel | #### Azure Database for PostgreSQL source script example |
data-factory | Connector Azure Databricks Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-databricks-delta-lake.md | This Azure Databricks Delta Lake connector is supported for the following capabi |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① | |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* In general, the service supports Delta Lake with the following capabilities to meet your various needs. |
data-factory | Connector Azure File Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md | This Azure Files connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②|Γ£ô <small> Exclude storage account V1| |[Delete activity](delete-activity.md)|① ②|Γ£ô <small> Exclude storage account V1| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from Azure Files to any supported sink data store, or copy data from any supported source data store to Azure Files. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Azure Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-search.md | This Azure Cognitive Search connector is supported for the following capabilitie || --| --| |[Copy activity](copy-activity-overview.md) (-/sink)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from any supported source data store into search index. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Azure Sql Data Warehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md | This Azure Synapse Analytics connector is supported for the following capabiliti |[Script activity](transform-data-using-script.md)|① ②|Γ£ô | |[Stored procedure activity](transform-data-using-stored-procedure.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For Copy activity, this Azure Synapse Analytics connector supports these functions: |
data-factory | Connector Azure Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md | This Azure SQL Database connector is supported for the following capabilities: |[Script activity](transform-data-using-script.md)|① ②|Γ£ô | |[Stored procedure activity](transform-data-using-stored-procedure.md)|① ②|Γ£ô | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For Copy activity, this Azure SQL Database connector supports these functions: |
data-factory | Connector Azure Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md | This Azure SQL Managed Instance connector is supported for the following capabil |[Script activity](transform-data-using-script.md)|① ②|Γ£ô <small> Public preview | |[Stored procedure activity](transform-data-using-stored-procedure.md)|① ②|Γ£ô <small> Public preview | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For Copy activity, this Azure SQL Database connector supports these functions: The below table lists the properties supported by Azure SQL Managed Instance sou | Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |- | | Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize |-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel | +| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel | | Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- | | Incremental column | When using the incremental extract feature, you must choose the date/time or numeric column that you wish to use as the watermark in your source table. | No | - |- | | Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL MI before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- | |
data-factory | Connector Azure Table Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md | This Azure Table storage connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|① ②|Γ£ô <small> Exclude storage account V1| |[Lookup activity](control-flow-lookup-activity.md)|① ②|Γ£ô <small> Exclude storage account V1| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* You can copy data from any supported source data store to Table storage. You also can copy data from Table storage to any supported sink data store. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Cassandra | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md | This Cassandra connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Concur | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md | This Concur connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Couchbase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md | This Couchbase connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Dataworld | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md | This data.world connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Db2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md | This DB2 connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Drill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md | This Drill connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Dynamics Ax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md | This Dynamics AX connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that supports as sources and sinks, see [Supported data stores](connector-overview.md#supported-data-stores). |
data-factory | Connector Dynamics Crm Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md | This connector is supported for the following activities: |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① | |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector File System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md | This file system connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this file system connector supports: |
data-factory | Connector Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md | This FTP connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this FTP connector supports: |
data-factory | Connector Google Adwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md | This Google AdWords connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Google Bigquery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md | This Google BigQuery connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Google Cloud Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md | This Google Cloud Storage connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this Google Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of GCS's S3-compatible interoperability. |
data-factory | Connector Google Sheets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md | This Google Sheets connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Greenplum | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md | This Greenplum connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Hbase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md | This HBase connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Hdfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md | This HDFS connector is supported for the following capabilities: |[Lookup activity](control-flow-lookup-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, the HDFS connector supports: |
data-factory | Connector Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md | This Hive connector is supported for the following capabilities: |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md | This HTTP connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores). |
data-factory | Connector Hubspot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md | This HubSpot connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks , see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Impala | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md | This Impala connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Informix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md | This Informix connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Jira | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-jira.md | This Jira connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Magento | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md | This Magento connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Mariadb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md | This MariaDB connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Marketo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md | This Marketo connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Microsoft Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md | This Microsoft Access connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Microsoft Fabric Lakehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md | This Microsoft Fabric Lakehouse connector is supported for the following capabil
|[Copy activity](copy-activity-overview.md) (source/sink)|① ②|✓ |
|[Mapping data flow](concepts-data-flow-overview.md) (-/sink)|① |- |
-<small>*① Azure integration runtime ② Self-hosted integration runtime*</small>
+*① Azure integration runtime ② Self-hosted integration runtime*

## Get started |
data-factory | Connector Mongodb Atlas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md | This MongoDB Atlas connector is supported for the following capabilities: || --| |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Mongodb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md | This MongoDB connector is supported for the following capabilities: || --| |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md | This MySQL connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Netezza | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md | This Netezza connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Odata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md | This OData connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores). |
data-factory | Connector Odbc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md | This ODBC connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md | This Microsoft 365 (Office 365) connector is supported for the following capabil
|[Copy activity](copy-activity-overview.md) (source/-)|①|
|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|①|
-<small>*① Azure integration runtime ② Self-hosted integration runtime*</small>
+*① Azure integration runtime ② Self-hosted integration runtime*

The ADF Microsoft 365 (Office 365) connector and Microsoft Graph Data Connect enable at-scale ingestion of different types of datasets from Exchange email-enabled mailboxes, including address book contacts, calendar events, email messages, user information, mailbox settings, and so on. Refer [here](/graph/data-connect-datasets) to see the complete list of available datasets. |
data-factory | Connector Oracle Cloud Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md | This Oracle Cloud Storage connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, this Oracle Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of Oracle Cloud Storage's S3-compatible interoperability. |
data-factory | Connector Oracle Eloqua | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-eloqua.md | This Oracle Eloqua connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Oracle Responsys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md | This Oracle Responsys connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Oracle Service Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-service-cloud.md | This Oracle Service Cloud connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md | This Oracle connector is supported for the following capabilities: |[Lookup activity](control-flow-lookup-activity.md)|① ②| |[Script activity](transform-data-using-script.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Paypal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md | This PayPal connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Phoenix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md | This Phoenix connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md | This PostgreSQL connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Presto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-presto.md | This Presto connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Quickbase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md | This Quickbase connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Quickbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md | This QuickBooks connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md | This REST connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores). |
data-factory | Connector Salesforce Marketing Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md | This Salesforce Marketing Cloud connector is supported for the following capabil |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Salesforce Service Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md | This Salesforce Service Cloud connector is supported for the following capabilit |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Salesforce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md | This Salesforce connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Sap Business Warehouse Open Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md | This SAP Business Warehouse Open Hub connector is supported for the following ca |[Copy activity](copy-activity-overview.md) (source/-)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Sap Business Warehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md | This SAP Business Warehouse connector is supported for the following capabilitie |[Copy activity](copy-activity-overview.md) (source/-)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Sap Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md | This SAP CDC connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|①, ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* This SAP CDC connector uses the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md). |
data-factory | Connector Sap Cloud For Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-cloud-for-customer.md | This SAP Cloud for Customer connector is supported for the following capabilitie |[Copy activity](copy-activity-overview.md) (source/sink)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Sap Ecc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md | This SAP ECC connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Sap Hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md | This SAP HANA connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/sink)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Sap Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md | This SAP table connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of the data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md | This ServiceNow connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Sftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md | This SFTP connector is supported for the following capabilities: |[GetMetadata activity](control-flow-get-metadata-activity.md)|① ②| |[Delete activity](delete-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* Specifically, the SFTP connector supports: |
data-factory | Connector Sharepoint Online List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md | This SharePoint Online List connector is supported for the following capabilitie |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Shopify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md | This Shopify connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Smartsheet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md | This Smartsheet connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Snowflake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md | This Snowflake connector is supported for the following capabilities: |[Lookup activity](control-flow-lookup-activity.md)|① ②| |[Script activity](transform-data-using-script.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For the Copy activity, this Snowflake connector supports the following functions: |
data-factory | Connector Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md | This Spark connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md | This SQL Server connector is supported for the following capabilities:
|[Script activity](transform-data-using-script.md)|① ②|
|[Stored procedure activity](transform-data-using-stored-procedure.md)|① ②|
-<small>*① Azure integration runtime ② Self-hosted integration runtime*</small>
+*① Azure integration runtime ② Self-hosted integration runtime*

For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.

The below table lists the properties supported by SQL Server source. You can edi
| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |- |
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query |
| Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize |
-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel |
| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- |
| Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
| Enable native change data capture (Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- | |
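The "Enable incremental extract" and "Incremental date column" options in the entry above describe a watermark pattern: each run reads only rows whose date/time column is newer than the value recorded by the previous run. The sketch below is a minimal, hypothetical illustration of that same pattern in plain Python (outside of ADF) using pyodbc; the connection string, table name, and column names are assumptions for illustration only, not values from the article.

```python
import pyodbc

# Hypothetical illustration of the watermark pattern behind "incremental extract":
# read only rows whose watermark column is newer than the value saved by the last run.
# The connection string, table, and column names below are placeholders.
CONN_STR = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>"

def read_incremental(last_watermark):
    """Return new or changed rows plus the watermark value to persist for the next run."""
    query = (
        "SELECT CustomerID, Name, LastModified "
        "FROM MyTable "
        "WHERE LastModified > ? "   # the 'incremental date column'
        "ORDER BY LastModified"
    )
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(query, last_watermark).fetchall()
    new_watermark = rows[-1].LastModified if rows else last_watermark
    return rows, new_watermark
```

Each run would load the previous watermark, fetch the delta, and persist the new watermark; when the incremental extract option is enabled, ADF manages that state for you.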
data-factory | Connector Square | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md | This Square connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Sybase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md | This Sybase connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Teamdesk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md | This TeamDesk connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Teradata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md | This Teradata connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md | This Twilio connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Vertica | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md | This Vertica connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table. |
data-factory | Connector Web Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-web-table.md | This Web table connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|②| |[Lookup activity](control-flow-lookup-activity.md)|②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Xero | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md | This Xero connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Zendesk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md | This Zendesk connector is supported for the following capabilities: || --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|① | -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Connector Zoho | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zoho.md | This Zoho connector is supported for the following capabilities: |[Copy activity](copy-activity-overview.md) (source/-)|① ②| |[Lookup activity](control-flow-lookup-activity.md)|① ②| -<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> +*① Azure integration runtime ② Self-hosted integration runtime* For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. |
data-factory | Copy Activity Schema And Type Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md | Copy activity currently supports the following interim data types: Boolean, Byte
The following data type conversions are supported between the interim types from source to sink.

-| Source\Sink | Boolean | Byte array | Decimal | Date/Time <small>(1)</small> | Float-point <small>(2)</small> | GUID | Integer <small>(3)</small> | String | TimeSpan |
+| Source\Sink | Boolean | Byte array | Decimal | Date/Time (1) | Float-point (2) | GUID | Integer (3) | String | TimeSpan |
| -- | - | - | - | - | - | - | -- | - | -- |
| Boolean | ✓ | | ✓ | | ✓ | | ✓ | ✓ | |
| Byte array | | ✓ | | | | | | ✓ | |
|
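To make the conversion matrix above concrete, here is a small sketch that encodes the two rows shown (Boolean and Byte array) as an allow-list and validates a proposed source-to-sink type mapping against it. The dictionary contents come straight from the table; the helper function and the omitted rows of the full matrix are illustrative assumptions.

```python
# Allowed interim-type conversions, transcribed from the two matrix rows shown above.
# The full matrix in the article covers more source types; only these two are encoded here.
ALLOWED_CONVERSIONS = {
    "Boolean": {"Boolean", "Decimal", "Float-point", "Integer", "String"},
    "Byte array": {"Byte array", "String"},
}

def is_supported(source_type: str, sink_type: str) -> bool:
    """Check whether the copy activity can convert the source interim type to the sink interim type."""
    return sink_type in ALLOWED_CONVERSIONS.get(source_type, set())

# Example: Boolean -> String is supported, Byte array -> Decimal is not.
assert is_supported("Boolean", "String")
assert not is_supported("Byte array", "Decimal")
```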
data-manager-for-agri | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md | Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]

+## November 2023
+
+### LLM capability
+Our LLM capability enables seamless selection of the APIs mapped to farm operations today, covering tillage, planting, application, and harvesting operations. Over time, we'll add the ability to select APIs mapped to soil sensor, weather, and imagery data. The skills in our LLM capability can combine results, calculate area, rank, and summarize to help serve customer prompts. These capabilities enable others to build their own agriculture copilots that deliver insights to farmers. Learn more about this [here](concepts-llm-apis.md).
+
## October 2023
-### Azure portal experience enhancement:
+### Azure portal experience enhancement
We released a new user-friendly experience to install ISV solutions that are available for Azure Data Manager for Agriculture users. You can now go to your Azure Data Manager for Agriculture instance on the Azure portal, then view and install available solutions in a seamless user experience. Today the ISV solutions available are from Bayer AgPowered services; you can see the marketplace listing [here](https://azuremarketplace.microsoft.com/marketplace/apps?search=bayer&page=1). You can learn more about installing ISV solutions [here](how-to-set-up-isv-solution.md).
## July 2023
-### Weather API update:
+### Weather API update
We deprecated the old weather APIs as of API version 2023-07-01. The old weather APIs are replaced with new, simple yet powerful, provider-agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather).
-### New farm operations connector:
+### New farm operations connector
We added support for Climate FieldView as a built-in data source. You can now auto-sync planting, application, and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
-### Common Data Model now with geo-spatial support:
-We updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
+### Common Data Model now with geo-spatial support
+We updated our data model to improve flexibility. The boundary object is deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries.
Learn more [here](concepts-hierarchy-model.md).

## June 2023 |
defender-for-cloud | Agentless Container Registry Vulnerability Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md | Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
| [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |

-- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md).
- **Query scan results via REST API** - Learn how to query scan results via [REST API](subassessment-rest-api.md).
- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).
- **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md). |
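The entry above points to querying these container vulnerability findings through Azure Resource Graph and the REST API. The following is a minimal, hypothetical sketch of such a query in Python against the Azure Resource Graph REST endpoint; the exact KQL shape of the sub-assessment records, the filter on the assessment key, and the token acquisition are assumptions to verify against the linked articles.

```python
import requests

# Hypothetical sketch: query Azure Resource Graph for container vulnerability
# sub-assessments related to the registry-image recommendation listed above.
# Replace the placeholders; the KQL below is illustrative, not the documented query.
SUBSCRIPTION_ID = "<subscription-id>"
TOKEN = "<bearer-token-for-https://management.azure.com>"

ARG_URL = "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01"

query = """
securityresources
| where type =~ 'microsoft.security/assessments/subassessments'
| where id contains 'c0b7cfc6-3172-465a-b378-53c7ff2cc0d5'   // registry images assessment key from the table above
| project id, name, properties
"""

response = requests.post(
    ARG_URL,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json={"subscriptions": [SUBSCRIPTION_ID], "query": query},
    timeout=30,
)
response.raise_for_status()

# Print each finding's name and status code (field names assumed from the sub-assessment schema).
for row in response.json().get("data", []):
    print(row.get("name"), row.get("properties", {}).get("status", {}).get("code"))
```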
defender-for-cloud | Attack Path Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md | - Title: Reference list of attack paths and cloud security graph components -description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource. -- Previously updated : 09/05/2023---# Reference list of attack paths and cloud security graph components --This article lists the attack paths, connections, and insights used in Defender Cloud Security Posture Management (CSPM). --- You need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths.-- What you see in your environment depends on the resources you're protecting, and your customized configuration.--Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md). --## Attack paths --### Azure VMs --Prerequisite: For a list of prerequisites, see the [Availability table](how-to-manage-attack-path.md#availability) for attack paths. --| Attack path display name | Attack path description | -|--|--| -| Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. | -| Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. | -| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | -| Internet exposed VM has high severity vulnerabilities and read permission to a data store | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a data store. | -| Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a key vault. | -| VM has high severity vulnerabilities and high permission to a subscription | A virtual machine has high severity vulnerabilities and has high permission to a subscription. | -| VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | -| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | -| VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. 
| -| Internet exposed VM has high severity vulnerability and insecure SSH private key that can authenticate to another VM | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to another AWS EC2 instance | -| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | -| VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | -| VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an Azure storage account | -| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account | --### AWS EC2 instances --Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md). --| Attack path display name | Attack path description | -|--|--| -| Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. | -| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. | -| Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | -| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | An AWS EC2 instance is reachable from the internet has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | -| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an AWS Key Management Service (KMS) via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM policy and an AWS KMS policy.| -| Internet exposed EC2 instance has high severity vulnerabilities | An AWS EC2 instance is reachable from the internet and has high severity vulnerabilities. 
| -| EC2 instance with high severity vulnerabilities has high privileged permissions to an account | An AWS EC2 instance has high severity vulnerabilities and has permissions to an account. | -| EC2 instance with high severity vulnerabilities has read permissions to a data store |An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket via an IAM policy or via a bucket policy, or via both an IAM policy and a bucket policy. | -| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | -| EC2 instance with high severity vulnerabilities has read permissions to a KMS key | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an AWS Key Management Service (KMS) key via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM and AWS KMS policy. | -| Internet exposed EC2 instance has high severity vulnerability and insecure SSH private key that can authenticate to another AWS EC2 instance | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to another AWS EC2 instance | -| Internet exposed EC2 instance has high severity vulnerabilities and has insecure secret that is used to authenticate to a RDS resource | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource | -| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to a RDS resource | An AWS EC2 instance has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource | -| Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both | --### GCP VM Instances --| Attack path display name | Attack path description | -|--|--| -| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. | -| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities[Remote Code Execution] and has read permissions to a data store. 
| -| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine and assigned with Service Account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. | -| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities[Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. | -| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities[Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. | -| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. | -| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. | -| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities[Remote Code Execution] and has read permissions to a data store. | -| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. | -| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities[Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'.| -| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities[Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. | -| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance to align with all other attack paths. Virtual machine '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. | --### Azure data --| Attack path display name | Attack path description | -|--|--| -| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. 
<br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | -| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | -| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)| -| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)| -| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. | -| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | -| Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | -| Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.| -| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container | An internal Azure storage container replicates its data to another Azure storage container that is reachable from the internet and allows public access, and poses this data at risk. | -| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).| --### AWS data --| Attack path display name | Attack path description | -|--|--| -| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. 
<br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | -|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). | -|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | -|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | -| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | -| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. | -| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks.| -|Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | -|Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.| -| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket | An internal AWS S3 bucket replicates its data to another S3 bucket which is reachable from the internet and allows public access, and poses this data at risk. | -| RDS snapshot is publicly available to all AWS accounts (Preview) | A snapshot of an RDS instance or cluster is publicly accessible by all AWS accounts. 
| -| Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute | -| Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) | -| SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute | -| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) | -| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket is replicating data to internet exposed and publicly accessible AWS S3 bucket | -| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket| -| RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts | --### GCP data --| Attack path display name | Attack path description | -|--|--| -| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. | --### Azure containers --Prerequisite: [Enable agentless container posture](concept-agentless-containers.md). This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. --| Attack path display name | Attack path description | -|--|--| -| Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | An internet exposed Kubernetes pod in a namespace is running a container using an image that has vulnerabilities allowing remote code execution. | -| Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | A Kubernetes pod in a namespace with host network access enabled is exposed to the internet via the host network. The pod is running a container using an image that has vulnerabilities allowing remote code execution. | --### Azure DevOps repositories --Prerequisite: [Enable DevOps Security in Defender for Cloud](defender-for-devops-introduction.md). --| Attack path display name | Attack path description | -|--|--| -| Internet exposed Azure DevOps repository with plaintext secret is publicly accessible | An Azure DevOps repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. 
| --### GitHub repositories --Prerequisite: [Enable DevOps Security in Defender for Cloud](defender-for-devops-introduction.md). --| Attack path display name | Attack path description | -|--|--| -| Internet exposed GitHub repository with plaintext secret is publicly accessible | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. | --### APIs - -Prerequisite: [Enable Defender for APIs](defender-for-apis-deploy.md). - -| Attack path display name | Attack path description | -|--|--| -| Internet exposed APIs that are unauthenticated carry sensitive data | Azure API Management API is reachable from the internet, contains sensitive data and has no authentication enabled resulting in attackers exploiting APIs for data exfiltration. | --## Cloud security graph components list --This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md). --### Insights --| Insight | Description | Supported entities | -|--|--|--| -| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance | -| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance, Azure MariaDB Single Server, Azure MySQL Single Server, Azure MySQL Flexible Server, Synapse Workspace, Azure PostgreSQL Single Server, Azure SQL Managed Instance | -| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | MDC Sensitive data discovery:<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server (preview), Azure SQL Database (preview), RDS Instance (preview), RDS Instance Database (preview), RDS Cluster (preview)<br /><br />Purview Sensitive data discovery (preview):<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket | -| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | -| Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | -| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources | -| Installed software | Lists all software installed on the machine. 
This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 | -| Allows public access | Indicates that public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, Azure DevOps repository, GitHub repository, GCP cloud storage bucket | -| Doesn't have MFA enabled | Indicates that the user account does not have a multifactor authentication solution enabled | Microsoft Entra user account, IAM user | -| Is external user | Indicates that the user account is outside the organization's domain | Microsoft Entra user account | -| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity | -| Contains common usernames | Indicates that a SQL server has user accounts with common usernames which are prone to brute force attacks. | SQL VM, Arc-Enabled SQL VM | -| Can execute code on the host | Indicates that a SQL server allows executing code on the underlying VM using a built-in mechanism such as xp_cmdshell. | SQL VM, Arc-Enabled SQL VM | -| Has vulnerabilities | Indicates that the SQL server resource has vulnerabilities detected | SQL VM, Arc-Enabled SQL VM | -| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | -| Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | -| Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod | -| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance | -| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance | -| Public IP metadata | Lists the metadata of a Public IP | Public IP | -| Identity metadata | Lists the metadata of an identity | Microsoft Entra identity | --### Connections --| Connection | Description | Source entity types | Destination entity types | -|--|--|--|--| -| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Microsoft Entra managed identity | -| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Microsoft Entra user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources | -| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, RDS Cluster, RDS Instance, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database, RDS Instance, RDS Instance Database | -| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod | Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet 
gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group | -| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod | -| Member of | Indicates that the source identity is a member of the target identities group | Microsoft Entra group, Microsoft Entra user | Microsoft Entra group | -| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod | --## Next steps --- [Identify and analyze risks across your environment](concept-attack-path.md)-- [Identify and remediate attack paths](how-to-manage-attack-path.md)-- [Cloud security explorer](how-to-manage-cloud-security-explorer.md) |
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | The following table summarizes each plan and their cloud availability. | [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | -DevOps security features under the Defender CSPM plan will remain free until March 1, 2024. Defender CSPM DevOps security features include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more. |
defender-for-cloud | Concept Data Security Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md | Defender CSPM provides visibility and contextual insights into your organization Attack path analysis helps you to address security issues that pose immediate threats, and have the greatest potential for exploit in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the risks. -You can discover risk of data breaches by attack paths of internet-exposed VMs that have access to sensitive data stores. Hackers can exploit exposed VMs to move laterally across the enterprise to access these stores. Review [attack paths](attack-path-reference.md#attack-paths). +You can discover risk of data breaches by attack paths of internet-exposed VMs that have access to sensitive data stores. Hackers can exploit exposed VMs to move laterally across the enterprise to access these stores. ### Cloud Security Explorer |
defender-for-cloud | Concept Integration 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md | + + Title: Alerts and incidents in Microsoft 365 Defender +description: Learn about the benefits of receiving Microsoft Defender for Cloud's alerts in Microsoft 365 Defender + Last updated : 11/16/2023+++# Alerts and incidents in Microsoft 365 Defender ++Microsoft Defender for Cloud's integration with Microsoft 365 Defender allows security teams to access Defender for Cloud alerts and incidents within the Microsoft 365 Defender portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. ++The partnership with Microsoft 365 Defender allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. This is achieved through immediate correlations of alerts and incidents. ++Microsoft 365 Defender offers a comprehensive solution that combines protection, detection, investigation, and response capabilities to protect against attacks on device, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency. ++Incidents and alerts are now part of [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal. ++## Investigation experience in Microsoft 365 Defender ++The following table describes the detection and investigation experience in Microsoft 365 Defender with Defender for Cloud alerts. ++| Area | Description | +|--|--| +| Incidents | All Defender for Cloud incidents are integrated into Microsoft 365 Defender. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue?view=o365-worldwide) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#attack-story) graph shows cloud resources. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#assets) in an incident page shows the cloud resource. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. | +| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providers' alerts, are integrated into Microsoft 365 Defender. Defender for Cloud alerts show on the Microsoft 365 Defender [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br> <br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defender for Cloud alerts are automatically associated with a tenant. 
<br> <br> There are no duplications of alerts from other Defender workloads. | +| Alert and incident correlation | Alerts and incidents are automatically correlated, providing robust context to security operations teams to understand the complete attack story in their cloud environment. | +| Threat detection | Accurate matching of virtual entities to device entities to ensure precision and effective threat detection. | +| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide), allowing customers to export their security alerts data into other systems using one API. | ++Learn more about [handling alerts in Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide). ++## Next steps ++[Security alerts - a reference guide](alerts-reference.md) |
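Because Defender for Cloud alerts and incidents are exposed through the Microsoft 365 Defender public API described in the row above, they can be pulled into other systems with a single call. The following is a minimal sketch rather than the documented integration procedure: it assumes an app registration that has been granted the incident-read application permission, and the tenant/client placeholders are hypothetical values.

```python
# Minimal sketch: pull incidents (which include Defender for Cloud incidents)
# from the Microsoft 365 Defender public API using client-credentials auth.
# Assumes an app registration with the Incident.Read.All application permission;
# tenant_id/client_id/client_secret below are placeholders, not real values.
import requests

tenant_id = "<tenant-id>"
client_id = "<app-client-id>"
client_secret = "<app-client-secret>"

# Request a token for the Microsoft 365 Defender API resource.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": "https://api.security.microsoft.com/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# List recent incidents; each incident carries its correlated alerts.
incidents = requests.get(
    "https://api.security.microsoft.com/api/incidents",
    headers={"Authorization": f"Bearer {access_token}"},
    params={"$top": 10},
).json()

for incident in incidents.get("value", []):
    print(incident["incidentId"], incident["incidentName"], incident["severity"])
```

Check the fields and OData options against the Microsoft 365 Defender API reference before relying on this pattern in automation.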
defender-for-cloud | Connect Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md | Title: Connect your Azure subscriptions description: Learn how to connect your Azure subscriptions to Microsoft Defender for Cloud Previously updated : 07/10/2023 Last updated : 11/02/2023 Microsoft Defender for Cloud is a cloud-native application protection platform ( - A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads -Defender for Cloud includes Foundational CSPM capabilities for free, complemented by additional paid plans required to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) for free. You can add additional paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. If you want to disable any of the plans, toggle the individual plan to **off**. > [!TIP] > To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md). +## Integrate with Microsoft 365 Defender ++When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed. ++The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface. ++Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md). + ## Next steps In this guide, you enabled Defender for Cloud on your Azure subscription. The next step is to set up your hybrid and multicloud environments. |
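The row above covers enabling Defender for Cloud plans through the portal. The same switch can be automated against the `Microsoft.Security/pricings` ARM resource; the sketch below is an assumption-laden illustration, and both the `api-version` value and the `CloudPosture` plan name should be confirmed against the current pricings REST reference.

```python
# Sketch: enable the Defender CSPM plan on a subscription via the
# Microsoft.Security/pricings ARM API. The api-version and the "CloudPosture"
# plan name are assumptions to confirm against the current REST reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
plan_name = "CloudPosture"              # Defender CSPM plan (assumed name)
api_version = "2023-01-01"              # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Security/pricings/{plan_name}"
)
resp = requests.put(
    url,
    params={"api-version": api_version},
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},
)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```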
defender-for-cloud | Data Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md | description: Learn how data is managed and safeguarded in Microsoft Defender for Previously updated : 07/18/2023 Last updated : 11/02/2023 # Microsoft Defender for Cloud data security Customers can access Defender for Cloud related data from the following data str | [Azure Monitor logs](../azure-monitor/data-platform.md) | All security alerts. | | [Azure Resource Graph](../governance/resource-graph/overview.md) | Security alerts, security recommendations, vulnerability assessment results, secure score information, status of compliance checks, and more. | | [Microsoft Defender for Cloud REST API](/rest/api/defenderforcloud/) | Security alerts, security recommendations, and more. |- > [!NOTE] > If there are no Defender plans enabled on the subscription, data will be removed from Azure Resource Graph after 30 days of inactivity in the Microsoft Defender for Cloud portal. After interaction with artifacts in the portal related to the subscription, the data should be visible again within 24 hours. +## Defender for Cloud and Microsoft 365 Defender integration ++When you enable any of Defender for Cloud's paid plans, you automatically gain all of the benefits of Microsoft 365 Defender. Information from Defender for Cloud will be shared with Microsoft 365 Defender. This data may contain customer data and will be stored according to [Microsoft 365 data handling guidelines](/microsoft-365/security/defender/data-privacy?view=o365-worldwide). + ## Next steps In this document, you learned how data is managed and safeguarded in Microsoft Defender for Cloud. |
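The data streams listed in the table above can be consumed programmatically; for example, the Microsoft Defender for Cloud REST API exposes security alerts at the subscription scope. The sketch below is a hedged example, and the `api-version` shown is an assumption to verify against the alerts REST reference.

```python
# Sketch: list Defender for Cloud security alerts for a subscription via the
# Microsoft.Security/alerts REST API. The api-version is an assumption.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/alerts",
    params={"api-version": "2022-01-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for alert in resp.json().get("value", []):
    props = alert.get("properties", {})
    print(props.get("alertDisplayName"), props.get("severity"), props.get("status"))
```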
defender-for-cloud | Defender For Cloud Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md | Title: Defender for Cloud glossary description: The glossary provides a brief description of important Defender for Cloud platform terms and concepts. Previously updated : 07/18/2023 Last updated : 11/08/2023 Azure Security Benchmark provides recommendations on how you can secure your clo ### **Attack Path Analysis** -A graph-based algorithm that scans the cloud security graph, exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. See [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis). +A graph-based algorithm that scans the cloud security graph, exposes attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach. See [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis). ### **Auto-provisioning** Data-aware security posture automatically discovers datastores containing sensit ### Defender agent -The DaemonSet that is deployed on each node, collects signals from hosts using eBPF technology, and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. For more information, see [Architecture for each Kubernetes environment](defender-for-containers-architecture.md#architecture-for-each-kubernetes-environment). +The DaemonSet that is deployed on each node, collects signals from hosts using eBPF technology, and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It's deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. For more information, see [Architecture for each Kubernetes environment](defender-for-containers-architecture.md#architecture-for-each-kubernetes-environment). ### **DDOS Attack** Distributed denial-of-service, a type of attack where an attacker sends more req ### **EASM** -External Attack Surface Management. See [EASM Overview](how-to-manage-attack-path.md#external-attack-surface-management-easm). +External Attack Surface Management. See [EASM Overview](concept-easm.md). ### **EDR** Microsoft Defender Vulnerability Management. Learn how to [enable vulnerability ### **MFA** -Multi-factor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan.[How it works: Azure Multi Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md). +Multifactor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan.[How it works: Azure multifactor authentication](../active-directory/authentication/concept-mfa-howitworks.md). 
### **MITRE ATT&CK** Security alerts are the notifications generated by Defender for Cloud and Defend ### **Security Initiative** -A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. [What are security policies, initiatives, and recommendations?](security-policy-concept.md) +A collection of Azure Policy Definitions, or rules that are grouped together towards a specific goal or purpose. [What are security policies, initiatives, and recommendations?](security-policy-concept.md) ### **Security Policy** |
defender-for-cloud | Defender For Cloud Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md | Title: What is Microsoft Defender for Cloud?- description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. Previously updated : 07/24/2023 Last updated : 11/02/2023 # What is Microsoft Defender for Cloud? Microsoft Defender for Cloud is a cloud-native application protection platform ( - A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads -![Diagram that shows the core functionality of Microsoft Defender for Cloud.](media/defender-for-cloud-introduction/defender-for-cloud-pillars.png) > [!NOTE] > For Defender for Cloud pricing information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +When you [enable Defender for Cloud on your Azure subscription](connect-azure-subscription.md), you'll automatically gain access to Microsoft 365 Defender. ++The Microsoft 365 Defender portal provides richer context to investigations that span cloud resources, devices, and identities. In addition, security teams are able to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment, through the immediate correlation of all alerts and incidents, including cloud alerts and incidents. ++You can learn more about the [integration between Microsoft Defender for Cloud and Microsoft 365 Defender](concept-integration-365.md). ++ ## Secure cloud applications Defender for Cloud helps you to incorporate good security practices early during the software development process, or DevSecOps. You can protect your code management environments and your code pipelines, and get insights into your development environment security posture from a single location. Defender for Cloud empowers security teams to manage DevOps security across multi-pipeline environments. Today's applications require security awareness at the code, infrastructure, a ## Improve your security posture -The security of your cloud and on-premises resources depends on proper configuration and deployment. Defender for Cloud recommendations identify the steps that you can take to secure your environment. +The security of your cloud and on-premises resources depends on proper configuration and deployment. Defender for Cloud recommendations identifies the steps that you can take to secure your environment. Defender for Cloud includes Foundational CSPM capabilities for free. You can also enable advanced CSPM capabilities by enabling the Defender CSPM plan. Defender for Cloud includes Foundational CSPM capabilities for free. You can als | [Data-aware Security Posture](concept-data-security-posture.md) | Data-aware security posture automatically discovers datastores containing sensitive data, and helps reduce risk of data breaches. | [Enable data-aware security posture](data-security-posture-enable.md) | Defender CSPM or Defender for Storage | | [Attack path analysis](concept-attack-path.md#what-is-attack-path-analysis) | Model traffic on your network to identify potential risks before you implement changes to your environment. 
| [Build queries to analyze paths](how-to-manage-attack-path.md) | Defender CSPM | | [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) | A map of your cloud environment that lets you build queries to find security risks. | [Build queries to find security risks](how-to-manage-cloud-security-explorer.md) | Defender CSPM |-| [Security governance](governance-rules.md#building-an-automated-process-for-improving-security-with-governance-rules) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md#defining-governance-rules-to-automatically-set-the-owner-and-due-date-of-recommendations) | Defender CSPM | +| [Security governance](governance-rules.md) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md) | Defender CSPM | | [Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) | Provide comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP. | [Review your Permission Creep Index (CPI)](other-threat-protections.md#entra-permission-management-formerly-cloudknox) | Defender CSPM | ## Protect cloud workloads |
defender-for-cloud | Defender For Containers Vulnerability Assessment Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md | Container vulnerability assessment powered by Qualys has the following capabilit | [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainerRegistryRecommendationDetailsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)| Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 | | [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c)ΓÇ»| Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c | -- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md). - **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get). - **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md). |
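The row above notes that container image vulnerability findings can be queried through Azure Resource Graph and the sub-assessment API. Below is a hedged sketch of a Resource Graph query that filters sub-assessments by the Qualys registry assessment key listed in that table (`dbd0cb49-b563-45e7-9724-889e799fa648`); the projected property names are assumptions to verify against your actual results.

```python
# Sketch: pull container image vulnerability findings (sub-assessments) from
# Azure Resource Graph, filtered to the Qualys registry assessment key shown in
# the table above. Projected property names are assumptions to verify.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

query = """
securityresources
| where type == "microsoft.security/assessments/subassessments"
| where id contains "dbd0cb49-b563-45e7-9724-889e799fa648"
| project id, findingName = tostring(properties.displayName), severity = tostring(properties.status.severity)
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))

for finding in result.data:
    print(finding["severity"], finding["findingName"])
```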
defender-for-cloud | Exempt Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md | -# Exempt resources from recommendations in Defender for Cloud +# Exempt resources from recommendations When you investigate security recommendations in Microsoft Defender for Cloud, you usually review the list of affected resources. Occasionally, a resource will be listed that you feel shouldn't be included. Or a recommendation will show in a scope where you feel it doesn't belong. For example, a resource might have been remediated by a process not tracked by Defender for Cloud, or a recommendation might be inappropriate for a specific subscription. Or perhaps your organization has decided to accept the risks related to the specific resource or recommendation. In such cases, you can create an exemption to: For the scope you need, you can create an exemption rule to: -- Mark a specific **recommendation** or as "mitigated" or "risk accepted" for one or more subscriptions, or for an entire management group.+- Mark a specific **recommendation** as "mitigated" or "risk accepted" for one or more subscriptions, or for an entire management group. - Mark **one or more resources** as "mitigated" or "risk accepted" for a specific recommendation. ## Before you start -This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]. This is a premium Azure Policy capability that's offered at no more cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. [Review Azure cloud support](support-matrix-cloud-environment.md). +This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] This is a premium Azure Policy capability that's offered at no additional cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. - You need the following permissions to make exemptions: - **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption To create an exemption rule: 1. In the Defender for Cloud portal, open the **Recommendations** page, and select the recommendation you want to exempt. -1. From the toolbar at the top of the page, select **Exempt**. +1. In **Take action**, select **Exempt**. :::image type="content" source="media/exempt-resource/exempting-recommendation.png" alt-text="Create an exemption rule for a recommendation to be exempted from a subscription or management group."::: After creating the exemption it can take up to 30 minutes to take effect. After - If you've exempted specific resources, they'll be listed in the **Not applicable** tab of the recommendation details page. - If you've exempted a recommendation, it will be hidden by default on Defender for Cloud's recommendations page. This is because the default options of the **Recommendation status** filter on that page are to exclude **Not applicable** recommendations. The same is true if you exempt all recommendations in a security control. - :::image type="content" source="media/exempt-resource/recommendations-filters-hiding-not-applicable.png" alt-text="Screenshot showing default filters on Microsoft Defender for Cloud's recommendations page hide the not applicable recommendations and security controls." 
lightbox="media/exempt-resource/recommendations-filters-hiding-not-applicable.png"::: ## Next steps -[Review recommendations](review-security-recommendations.md) in Defender for Cloud. +[Review exempted resources](review-exemptions.md) in Defender for Cloud. |
defender-for-cloud | Governance Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md | Title: Driving your organization to remediate security issues with recommendation governance -description: Learn how to assign owners and due dates to security recommendations and create rules to automatically assign owners and due dates + Title: Drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud +description: Learn how to drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud Previously updated : 01/23/2023 Last updated : 10/29/2023 -# Drive remediation with security governance +# Drive remediation with governance rules -Security teams are responsible for improving the security posture of their organizations but they might not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization. +While the security team is responsible for improving the security posture, team members might not actually implement security recommendations. -Stay on top of the progress on the recommendations in the security posture. Weekly email notifications to the owners and managers make sure that they take timely action on the recommendations that can improve your security posture and recommendations. +Using governance rules driven by the security team helps you to drive accountability and an SLA around the remediation process. -You can learn more by watching this video from the Defender for Cloud in the Field video series: +To learn more, watch [this episode](episode-fifteen.md) of the Defender for Cloud in the Field video series. -- [Remediate Security Recommendations with Governance](episode-fifteen.md)+## Governance rules -## Building an automated process for improving security with governance rules +You can define rules that assign an owner and a due date for addressing recommendations for specific resources. This provides resource owners with a clear set of tasks and deadlines for remediating recommendations. -To make sure your organization is systematically improving its security posture, you can define rules that assign an owner and set the due date for resources in the specified recommendations. That way resource owners have a clear set of tasks and deadlines for remediating recommendations. +For tracking, you can review the progress of the remediation tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention. -You can then review the progress of the tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention. +- Governance rules can identify resources that require remediation according to specific recommendations or severities. +- The rule assigns an owner and due date to ensure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date. +- The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. 
+- For example, if the rule identifies the resource on March 1 and the remediation timeframe is 14 days, March 15 is the due date. +- You can apply a grace period so that the resources given a due date don't affect your secure score. +- You can also set the owner of the resources that are affected by the specified recommendations. +- In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag. +- The owner is shown as unspecified when the owner wasn't found on the resource, the associated resource group, or the associated subscription based on the specified tag. +- By default, email notifications are sent to the resource owners weekly to provide a list of the on time and overdue tasks. +- If an email for the owner's manager is found in the organizational Microsoft Entra ID, the owner's manager receives a weekly email showing any overdue recommendations by default. +- Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS accounts and GCP organizations), take effect before rules on scopes (for example, Azure subscriptions, AWS accounts, or GCP projects). -### Availability +## Before you begin -|Aspect|Details| -|-|:-| -|Release state:|General availability (GA)| -|Prerequisite: | Requires the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) to be enabled.| -|Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP ΓÇô **Contributor**, **Security Admin**, or **Owner** on the connector| -|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts| +- To use governance rules, the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) must be enabled. +- You need **Contributor**, **Security Admin**, or **Owner** permissions on Azure subscriptions. +- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS/GCP connectors. -> [!NOTE] -> Starting January 1, 2023, governance capabilities will require Defender Cloud Security Posture Management (CSPM) plan enablement. -> Customers deciding to keep Defender CSPM plan off on scopes with governance content: -> -> - Existing assignments remain as is and continue to work with no customization option or ability to create new ones. -> - Existing rules will remain as is but wonΓÇÖt trigger new assignments creation. -### Defining governance rules to automatically set the owner and due date of recommendations +## Define a governance rule -Governance rules can identify resources that require remediation according to specific recommendations or severities. The rule assigns an owner and due date to ensure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date. 
--The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1 and the remediation timeframe is 14 days, March 15 is the due date. You can apply a grace period so that the resources that 's given a due date don't affect your secure score until they're overdue. --You can also set the owner of the resources that are affected by the specified recommendations. In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag. --The owner is shown as unspecified when the owner wasn't found on the resource, the associated resource group, or the associated subscription based on the specified tag. ---By default, email notifications are sent to the resource owners weekly to provide a list of the on time and overdue tasks. If an email for the owner's manager is found in the organizational Microsoft Entra ID, the owner's manager receives a weekly email showing any overdue recommendations by default. ---To define a governance rule that assigns an owner and due date: --1. Navigate to **Environment settings** > **Governance rules**. +Define a governance rule as follows. +1. In Defender for Cloud, open the **Environment settings** page, and select **Governance rules**. 1. Select **Create governance rule**.+1. In **Create governance rule** > **General details**, specify a rule name, and the scope in which the rule applies. ++ - Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope. + - You can define exclusions within the scope as needed. -1. Enter a name for the rule. -1. Select a scope to apply the rule to and use exclusions if needed. Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope. +1. Priority is assigned automatically. Rules are run in priority order from the highest (1) to the lowest (1000). +1. Specify a description to help you identify the rule. Then select **Next**. -1. Priority is assigned automatically after scope selection. You can override this field if needed. + :::image type="content" source="./media/governance-rules/add-rule.png" alt-text="Screenshot of page for adding a governance rule." lightbox="media/governance-rules/add-rule.png"::: -1. Select the recommendations that the rule applies to, either: +1. In the **Conditions** tab, specify how recommendations are impacted by the rule. - **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.- - **By specific recommendations** - Select the specific recommendations that the rule applies to. -1. Set the owner to assign to the recommendations either: + - **By specific recommendations** - Select the specific built-in or custom recommendations that the rule applies to. +1. In **Set owner**, specify who's responsible for fixing recommendations covered by the rule. - **By resource tag** - Enter the resource tag on your resources that defines the resource owner. - **By email address** - Enter the email address of the owner to assign to the recommendations.-1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due. -1. 
If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**. -1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options. -1. Select **Create**. --If there are existing recommendations that match the definition of the governance rule, you can either: --- Assign an owner and due date to recommendations that don't already have an owner or due date.-- Overwrite the owner and due date of existing recommendations.--> [!NOTE] -> When you delete or disable a rule, all existing assignments and notifications will remain. -> [!TIP] -> Here are some sample use-cases for the at-scale experience: -> -> - View and manage all governance rules effective in the organization using a single page. -> - Create and apply rules on multiple scopes at once using management scopes cross cloud. -> - Check effective rules on selected scope using the scope filter. --To view the effect of rules on a specific scope, use the Scope filter to select a specific scope. --Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS accounts and GCP organizations), take effect before rules on scopes (for example, Azure subscriptions, AWS accounts, or GCP projects). --## Manually assigning owners and due dates for recommendation remediation --For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that 's given a due date don't affect your secure score unless they become overdue. --To manually assign owners and due dates to recommendations: +1. In **Set remediation timeframe**, specify the time that can elapse between when resources are identified as requiring remediation, and the time that the remediation is due. +1. For recommendations issued by MCSB, if you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**. +1. By default owners and their managers are notified weekly about open and overdue tasks. If you don't want them to receive these weekly emails, clear the notification options. +1. Select **Create**. -1. Go to the list of recommendations: - - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve. - - Go to **Recommendations** in the Defender for Cloud menu. -1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score. + :::image type="content" source="./media/governance-rules/create-rule-conditions.png" alt-text="Screenshot of page for adding conditions for a governance rule." lightbox="media/governance-rules/create-rule-conditions.png"::: - > [!TIP] - > You can also use the search box and filters above the list of recommendations to find specific recommendations. -1. Select a recommendation to see the affected resources. -1. For any resource that doesn't have an owner or due date, select the resources and select **Assign owner**. -1. Enter the email address of the owner that needs to make the changes that remediate the recommendation for those resources. -1. Select the date by which to remediate the recommendation for the resources. -1. 
You can select **Apply grace period** to keep the resource from affecting the secure score until it's overdue. -1. Select **Save**. +- If there are existing recommendations that match the definition of the governance rule, you can either:
-The recommendation is now shown as assigned and on time.
 + - Assign an owner and due date to recommendations that don't already have an owner or due date.
 + - Overwrite the owner and due date of existing recommendations.
+- When you delete or disable a rule, all existing assignments and notifications remain.
-## Tracking the status of recommendations for further action
-After you define governance rules, you'll want to review the progress that the owners are making in remediating the recommendations.
+## View effective rules
-You can track the assigned and overdue recommendations in:
+You can view the effect of governance rules in your environment.
-- The security posture shows the number of unassigned and overdue recommendations.+1. In the Defender for Cloud portal, open the **Governance rules** page.
+1. Review governance rules. The default list shows all the governance rules applicable in your environment.
+1. You can search for rules, or filter rules.
 + - Filter on **Environment** to identify rules for Azure, AWS, and GCP.
 + - Filter on rule name, owner, or time between the recommendation being issued and due date.
 + - Filter on **Grace period** to find MCSB recommendations that won't affect your secure score.
 + - Identify by status.
 - :::image type="content" source="./media/governance-rules/governance-in-security-posture.png" alt-text="Screenshot of governance status in the security posture.":::
 + :::image type="content" source="./media/governance-rules/view-filter-rules.png" alt-text="Screenshot of page for viewing and filtering rules." lightbox="media/governance-rules/view-filter-rules.png":::
-- The list of recommendations shows the governance status of each recommendation.
 - :::image type="content" source="./media/governance-rules/governance-in-recommendations.png" alt-text="Screenshot of recommendations with their governance status." lightbox="media/governance-rules/governance-in-recommendations.png":::
-- The governance report in the governance rules settings lets you drill down into recommendations by rule and owner.
 - :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
-### Tracking progress by rule with the governance report
+## Review the governance report
 The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
-> [!NOTE]
-> Manual assignments will not appear on this report. To see all assignments by owner, use the Owner tab on the Security Posture page.
+1. In Defender for Cloud > **Environment settings** > **Governance rules**, select **Governance report**.
+1. In **Governance**, select a subscription.
-**To review the status of the recommendations in a rule**:
 + :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
-1. In **Recommendations**, select **Governance report**.
-1. Select the subscriptions that you want to review.
-1. 
Select the rules that you want to see details about. +1. From the governance report, you can drill down into recommendations by rule and owner.
-You can see the list of owners and recommendations for the selected rules, and their status.
--**To see the list of recommendations for each owner**:
--1. Select **Security posture**.
-1. Select the **Owner** tab to see the list of owners and the number of overdue recommendations for each owner.
-- - Hover over the (i) in the overdue recommendations to see the breakdown of overdue recommendations by severity.
-- - If the owner email address is found in the organizational Microsoft Entra ID, you'll see the full name and picture of the owner.
--1. Select **View recommendations** to go to the list of recommendations associated with the owner.
 ## Next steps
-In this article, you learned how to set up a process for assigning owners and due dates to tasks so that owners are accountable for taking steps to improve your security posture.
--Check out how owners can [set ETAs for tasks](review-security-recommendations.md#manage-the-owner-and-eta-of-recommendations-that-are-assigned-to-you) so that they can manage their progress.
-Learn how to [Implement security recommendations in Microsoft Defender for Cloud](implement-security-recommendations.md).
+Learn how to [Implement security recommendations](implement-security-recommendations.md). |
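To review governance assignments at scale outside the portal, a query sketch along the following lines can pull owners and due dates from Azure Resource Graph. The `microsoft.security/assessments/governanceassignments` type and the `owner` and `remediationDueDate` property names are assumptions to verify in your tenant before relying on the results.

```kusto
// Sketch: list governance assignments with their owner and due date.
// The resource type and property names below are assumptions - verify them
// against the schema returned in your own environment.
securityresources
| where type == "microsoft.security/assessments/governanceassignments"
| extend owner = tostring(properties.owner),
         dueDate = todatetime(properties.remediationDueDate)
| project id, subscriptionId, owner, dueDate
| order by dueDate asc
```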
defender-for-cloud | How To Manage Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md | Title: Identify and remediate attack paths- -description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. + Title: Identify and remediate attack paths in Microsoft Defender for Cloud +description: Learn how to identify and remediate attack paths in Microsoft Defender for Cloud -+ Last updated 11/01/2023 The attack path page shows you an overview of all of your attack paths. You can :::image type="content" source="media/concept-cloud-map/attack-path-homepage.png" alt-text="Screenshot of a sample attack path homepage." lightbox="media/concept-cloud-map/attack-path-homepage.png"::: -On this page you can organize your attack paths based on name, environment, paths count, risk categories. +On this page you can organize your attack paths based on risk level, name, environment, paths count, risk factors, entry point, target, the number of affected resources, or the number of active recommendations. -For each attack path, you can see all of risk categories and any affected resources. +For each attack path, you can see all of risk factors and any affected resources. -The potential risk categories include credentials exposure, compute abuse, data exposure, subscription and account takeover. +The potential risk factors include credentials exposure, compute abuse, data exposure, subscription and account takeover. Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md). You can use Attack path analysis to locate the biggest risks to your environmen 1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**. - :::image type="content" source="media/how-to-manage-attack-path/attack-path-blade.png" alt-text="Screenshot that shows the attack path analysis blade on the main screen." lightbox="media/how-to-manage-attack-path/attack-path-blade.png"::: + :::image type="content" source="media/how-to-manage-attack-path/attack-path-blade.png" alt-text="Screenshot that shows the attack path analysis page on the main screen." lightbox="media/how-to-manage-attack-path/attack-path-blade.png"::: 1. Select an attack path. - :::image type="content" source="media/how-to-manage-cloud-map/attack-path.png" alt-text="Screenshot that shows a sample of attack paths." lightbox="media/how-to-manage-cloud-map/attack-path.png" ::: -- > [!NOTE] - > An attack path might have more than one path that is at risk. The path count will tell you how many paths need to be remediated. If the attack path has more than one path, you will need to select each path within that attack path to remediate all risks. - 1. Select a node. :::image type="content" source="media/how-to-manage-cloud-map/node-select.png" alt-text="Screenshot of the attack path screen that shows you where the nodes are located for selection." lightbox="media/how-to-manage-cloud-map/node-select.png"::: Once an attack path is resolved, it can take up to 24 hours for an attack path t Attack path analysis also gives you the ability to see all recommendations by attack path without having to check each node individually. You can resolve all recommendations without having to view each node individually. +The remediation path contains two types of recommendation: ++- **Recommendations** - Recommendations that mitigate the attack path. 
+- **Additional recommendations** - Recommendations that reduce the exploitation risks, but don't mitigate the attack path.
+
**To resolve all recommendations**:
 1. Sign in to the [Azure portal](https://portal.azure.com).
Attack path analysis also gives you the ability to see all recommendations by at
1. Select an attack path.
-1. Select **Recommendations**.
+1. Select **Remediation**.
 :::image type="content" source="media/how-to-manage-cloud-map/bulk-recommendations.png" alt-text="Screenshot that shows where to select on the screen to see the attack paths full list of recommendations." lightbox="media/how-to-manage-cloud-map/bulk-recommendations.png":::
securityresources
```
**Get all instances for a specific attack path**:-For example, 'Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault'.
+For example, `Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault`.
```kusto
securityresources
The following table lists the data fields returned from the API response:
|--|--|
| ID | The Azure resource ID of the attack path instance|
| Name | The Unique identifier of the attack path instance|-| Type | The Azure resource type, always equals "microsoft.security/attackpaths"|
+| Type | The Azure resource type, always equals `microsoft.security/attackpaths`|
| Tenant ID | The tenant ID of the attack path instance |
| Location | The location of the attack path |
| Subscription ID | The subscription of the attack path |
The following table lists the data fields returned from the API response:
| Properties.graphComponent.connections | List of connections graph components related to the attack path |
| Properties.AttackPathID | The unique identifier of the attack path instance |
-## External attack surface management (EASM)
--An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it's to protect.
--While you're [investigating and remediating an attack path](#investigate-and-remediate-attack-paths), you can also view your EASM if it's available, and if you've enabled Defender EASM to your subscription.
--> [!NOTE]
-> To manage your EASM, you must [deploy the Defender EASM Azure resource](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md) to your subscription. Defender EASM has its own cost and is separate from Defender for Cloud. To learn more about Defender for EASM pricing options, you can check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/).
--**To manage your EASM**:
--1. Sign in to the [Azure portal](https://portal.azure.com).
--1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
--1. Select an attack path.
--1. Select a resource.
--1. Select **Insights**.
--1. Select **Open EASM**.
-- :::image type="content" source="media/how-to-manage-attack-path/open-easm.png" alt-text="Screenshot that shows you where on the screen you need to select open Defender EASM from." lightbox="media/how-to-manage-attack-path/easm-zoom.png":::
--1. Follow the [Using and managing discovery](../external-attack-surface-management/using-and-managing-discovery.md) instructions. 
- ## Next Steps Learn how to [build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md). |
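Because attack path instances surface in Azure Resource Graph as `microsoft.security/attackpaths` resources (per the API field table above), a minimal query sketch such as the following can enumerate them. The projected `displayName` and `riskLevel` property names are assumptions and may differ from the actual schema in your environment.

```kusto
// Sketch: enumerate attack path instances exposed through Azure Resource Graph.
// The type comes from the API reference above; displayName and riskLevel are
// assumed property names and should be checked against real results.
securityresources
| where type == "microsoft.security/attackpaths"
| extend displayName = tostring(properties.displayName),
         riskLevel = tostring(properties.riskLevel)
| project id, subscriptionId, displayName, riskLevel
```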
defender-for-cloud | How To Manage Cloud Security Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md | Title: Build queries with cloud security explorer- -description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. + Title: Build queries with cloud security explorer in Microsoft Defender for Cloud +description: Learn how to build queries with cloud security explorer in Microsoft Defender for Cloud Last updated 11/01/2023 |
defender-for-cloud | How To Test Attack Path And Security Explorer With Vulnerable Container Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md | Title: How-to test the attack path and cloud security explorer using a vulnerable container image in Microsoft Defender for Cloud -description: Learn how to test the attack path and security explorer using a vulnerable container image + Title: Test attack paths and cloud security explorer in Microsoft Defender for Cloud +description: Learn how to test attack paths and cloud security explorer in Microsoft Defender for Cloud Previously updated : 07/17/2023 Last updated : 11/08/2023
-# Testing the Attack Path and Security Explorer using a vulnerable container image
+# Test attack paths and cloud security explorer
-## Observing potential threats in the attack path experience
-Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
+Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach.
-Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on name, environment, path count, and risk categories. Explore cloud security graph Insights on the resource. Examples of Insight types are:
+Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on risk level, name, environment, risk factors, entry point, target, affected resources, and active recommendations. Explore cloud security graph Insights on the resource. Examples of Insight types are:
 - Pod exposed to the internet
 - Privileged container
You can build queries in one of the following ways:
### Find the security issue under attack paths
-1.Go to **Recommendations** in the Defender for Cloud menu.
-1. Select the **Attack Path** link to open the attack paths view.
+1. Sign in to the [Azure portal](https://portal.azure.com).
 :::image type="content" source="media/how-to-test-attack-path/attack-path.png" alt-text="Screenshot of showing where to select Attack Path." lightbox="media/how-to-test-attack-path/attack-path.png":::
+1. Navigate to **Attack path analysis**.
-1. Locate the entry that details this security issue under "Internet exposed Kubernetes pod is running a container with high severity vulnerabilities."
+1. Select an attack path.
 :::image type="content" source="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png" alt-text="Screenshot showing the security issue details." lightbox="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png":::
+1. Locate the entry that details this security issue under `Internet exposed Kubernetes pod is running a container with high severity vulnerabilities`.
### Explore risks with cloud security explorer templates |
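To confirm that the test scenario produced the expected attack path, a query along these lines can filter attack path instances by display name; the `properties.displayName` field is an assumption to verify against the data returned in your environment.

```kusto
// Sketch: locate the attack path generated by the vulnerable container image test.
// Assumes the human-readable name is exposed as properties.displayName.
securityresources
| where type == "microsoft.security/attackpaths"
| where tostring(properties.displayName) contains "Internet exposed Kubernetes pod"
| project id, subscriptionId, displayName = tostring(properties.displayName)
```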
defender-for-cloud | Implement Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md | Title: Implement security recommendations -description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies. + Title: Remediate security recommendations in Microsoft Defender for Cloud +description: Learn how to remediate security recommendations in Microsoft Defender for Cloud Previously updated : 10/20/2022 Last updated : 11/08/2023 -# Implement security recommendations in Microsoft Defender for Cloud +# Remediate security recommendations -Recommendations give you suggestions on how to better secure your resources. You implement a recommendation by following the remediation steps provided in the recommendation. +Resources and workloads protected by Microsoft Defender for Cloud are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture. -<a name="remediation-steps"></a> +This article describes how to remediate security recommendations in your Defender for Cloud deployment using the latest version of the portal experience. -## Remediation steps +## Before you start -After reviewing all the recommendations, decide which one to remediate first. We recommend that you prioritize the security controls with the highest potential to increase your secure score. +Before you attempt to remediate a recommendation you should review it in detail. Learn how to [review security recommendations](review-security-recommendations.md). -1. From the list, select a recommendation. +## Group recommendations by risk level -1. Follow the instructions in the **Remediation steps** section. Each recommendation has its own set of instructions. The following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS. +Before you start remediating, we recommend grouping your recommendations by risk level in order to remediate the most critical recommendations first. - :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="Manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png"::: +1. Sign in to the [Azure portal](https://portal.azure.com). -1. Once completed, a notification appears informing you whether the issue is resolved. +1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**. ++1. Select **Group by** > **Primary grouping** > **Risk level** > **Apply**. -## Fix button + :::image type="content" source="media/implement-security-recommendations/group-by-risk-level.png" alt-text="Screenshot of the recommendations page that shows how to group your recommendations." lightbox="media/implement-security-recommendations/group-by-risk-level.png"::: -To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option. + Recommendations are displayed in groups of risk levels. -**Fix** helps you quickly remediate a recommendation on multiple resources. +1. Review critical and other recommendations to understand the recommendation and remediation steps. 
Use the graph to understand the risk to your business, including which resources are exploitable, and the effect that the recommendation has on your business.
-To implement a **Fix**:
-1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
+## Remediate recommendations
- :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
+After reviewing recommendations by risk, decide which one to remediate first.
-1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Fix**.
+In addition to risk level, we recommend that you prioritize the security controls in the default [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) standard in Defender for Cloud, since these controls affect your [secure score](secure-score-security-controls.md).
- > [!NOTE]
- > Some of the listed resources might be disabled, because you don't have the appropriate permissions to modify them.
-1. In the confirmation box, read the remediation details and implications.
+1. In the **Recommendations** page, select the recommendation you want to remediate.
- ![Quick fix.](./media/implement-security-recommendations/microsoft-defender-for-cloud-quick-fix-view.png)
+1. In the recommendation details page, select **Take action** > **Remediate**.
+1. Follow the remediation instructions.
- > [!NOTE]
- > The implications are listed in the grey box in the **Fixing resources** window that opens after clicking **Fix**. They list what changes happen when proceeding with the **Fix**.
+ As an example, the following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS.
-1. Insert the relevant parameters if necessary, and approve the remediation.
+ :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="This screenshot shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
++1. Once completed, a notification appears informing you whether the issue is resolved.
- > [!NOTE]
- > It can take several minutes after remediation completes to see the resources in the **Healthy resources** tab. To view the remediation actions, check the [activity log](#activity-log).
+## Use the Fix option
-1. Once completed, a notification appears informing you if the remediation succeeded.
+To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option to help you quickly remediate a recommendation on multiple resources.
-<a name="activity-log"></a>
+1. In the **Recommendations** page, select a recommendation that shows the **Fix** action icon: :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::. 
-## Fix actions logged to the activity log + :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="This screenshot shows recommendations with the Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png"::: -The remediation operation uses a template deployment or REST API `PATCH` request to apply the configuration on the resource. These operations are logged in [Azure activity log](../azure-monitor/essentials/activity-log.md). +1. In **Take action**, select **Fix**. +1. Follow the rest of the remediation steps. +++After remediation completes, it can take several minutes to see the resources appear in the **Findings** tab when the status is filtered to view **Healthy** resources. ## Next steps -In this document, you were shown how to remediate recommendations in Defender for Cloud. To learn how recommendations are defined and selected for your environment, see the following page: +[Learn about](governance-rules.md) using governance rules in your remediation processes. + -- [What are security policies, initiatives, and recommendations?](security-policy-concept.md) |
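When deciding which recommendations to remediate first, it can also help to pull unhealthy assessments programmatically. The following Azure Resource Graph sketch uses the `microsoft.security/assessments` type that backs Defender for Cloud recommendations; treat the projected property paths as a starting point and adjust them to the data returned in your environment.

```kusto
// Sketch: list unhealthy (failed) recommendations with severity and affected resource.
securityresources
| where type == "microsoft.security/assessments"
| where properties.status.code == "Unhealthy"
| project recommendation = tostring(properties.displayName),
          severity = tostring(properties.metadata.severity),
          resourceId = tostring(properties.resourceDetails.Id),
          subscriptionId
```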
defender-for-cloud | Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md | The following table displays roles and allowed actions in Defender for Cloud. | Edit security policy | - | ✔ | - | - | ✔ | | Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ | | Dismiss alerts | - | ✔ | - | ✔ | ✔ |-| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ | +| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md)) | - | - | ✔ | ✔ | ✔ | | View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ | | Exempt security recommendations | - | - | ✔ | ✔ | ✔ | |
defender-for-cloud | Prevent Misconfigurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prevent-misconfigurations.md | - Title: How to prevent misconfigurations -description: Learn how to use Defender for Cloud's 'Enforce' and 'Deny' options on the recommendations details pages - Previously updated : 07/24/2023---# Prevent misconfigurations with Enforce/Deny recommendations --Security misconfigurations are a major cause of security incidents. Defender for Cloud can help *prevent* misconfigurations of new resources regarding specific recommendations. --This feature can help keep your workloads secure and stabilize your secure score. --Enforcing a secure configuration, based on a specific recommendation, is offered in two modes: --- Using the **Deny** effect of Azure Policy, you can stop unhealthy resources from being created.--- Using the **Enforce** option, you can take advantage of Azure Policy's **DeployIfNotExist** effect and automatically remediate non-compliant resources upon creation.--The ability to secure configurations can be found at the top of the resource details page for selected security recommendations (see [Recommendations with deny/enforce options](#recommendations-with-denyenforce-options)). --## Prevent resource creation --1. Open the recommendation that your new resources must satisfy, and select the **Deny** button at the top of the page. -- :::image type="content" source="./media/implement-security-recommendations/recommendation-deny-button.png" alt-text="Recommendation page with Deny button highlighted."::: -- The configuration pane opens listing the scope options. --1. Set the scope by selecting the relevant subscription or management group. -- > [!TIP] - > You can use the three dots at the end of the row to change a single subscription, or use the checkboxes to select multiple subscriptions or groups then select **Change to Deny**. -- :::image type="content" source="./media/implement-security-recommendations/recommendation-prevent-resource-creation.png" alt-text="Setting the scope for Azure Policy deny."::: --## Enforce a secure configuration --1. Open the recommendation that you'll deploy a template deployment for if new resources don't satisfy it, and select the **Enforce** button at the top of the page. -- :::image type="content" source="./media/implement-security-recommendations/recommendation-enforce-button.png" alt-text="Recommendation page with Enforce button highlighted."::: -- The configuration pane opens with all of the policy configuration options. -- :::image type="content" source="./media/implement-security-recommendations/recommendation-enforce-config.png" alt-text="Enforce configuration options."::: --1. Set the scope, assignment name, and other relevant options. --1. Select **Review + create**. 
--## Recommendations with deny/enforce options --These recommendations can be used with the **deny** option: ---These recommendations can be used with the **enforce** option: --- Auditing on SQL server should be enabled-- Azure Arc-enabled Kubernetes clusters should have Microsoft Defender for Cloud's extension installed-- Azure Backup should be enabled for virtual machines-- Microsoft Defender for App Service should be enabled-- Microsoft Defender for container registries should be enabled-- Microsoft Defender for Key Vault should be enabled-- Microsoft Defender for Kubernetes should be enabled-- Microsoft Defender for Resource Manager should be enabled-- Microsoft Defender for Servers should be enabled-- Microsoft Defender for Azure SQL Database servers should be enabled-- Microsoft Defender for SQL servers on machines should be enabled-- Microsoft Defender for SQL should be enabled for unprotected Azure SQL servers-- Microsoft Defender for Storage should be enabled-- Azure Policy Add-on for Kubernetes should be installed and enabled on your clusters-- Diagnostic logs in Azure Stream Analytics should be enabled-- Diagnostic logs in Batch accounts should be enabled-- Diagnostic logs in Data Lake Analytics should be enabled-- Diagnostic logs in Event Hub should be enabled-- Diagnostic logs in Key Vault should be enabled-- Diagnostic logs in Logic Apps should be enabled-- Diagnostic logs in Search services should be enabled-- Diagnostic logs in Service Bus should be enabled--## Next steps --[Automate responses to Microsoft Defender for Cloud triggers](workflow-automation.md) |
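The Deny and Enforce options described above are implemented through Azure Policy assignments, so one way to audit what they created is to query assignments through Azure Resource Graph. The sketch below is illustrative only; the projected property names are assumptions to confirm in your environment.

```kusto
// Sketch: review Azure Policy assignments, including any created by Deny/Enforce.
policyresources
| where type == "microsoft.authorization/policyassignments"
| extend displayName = tostring(properties.displayName),
         enforcementMode = tostring(properties.enforcementMode),
         scope = tostring(properties.scope)
| project id, displayName, enforcementMode, scope
```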
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | Title: Connect your AWS account description: Defend your AWS resources by using Microsoft Defender for Cloud. Previously updated : 10/22/2023 Last updated : 11/02/2023 # Connect your AWS account to Microsoft Defender for Cloud To view all the active recommendations for your resources by resource type, use :::image type="content" source="./media/quickstart-onboard-aws/aws-resource-types-in-inventory.png" alt-text="Screenshot of AWS options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-aws/aws-resource-types-in-inventory.png"::: +## Integrate with Microsoft 365 Defender ++When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed. ++The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface. ++Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md). + ## Learn more Check out the following blogs: |
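After connecting an AWS account, one way to confirm the connector exists is to query it through Azure Resource Graph. The `microsoft.security/securityconnectors` type and the `environmentName` and `hierarchyIdentifier` properties used below are assumptions to validate against your own connector resources.

```kusto
// Sketch: list onboarded AWS connectors and the AWS account each one represents.
resources
| where type == "microsoft.security/securityconnectors"
| extend cloud = tostring(properties.environmentName),
         accountId = tostring(properties.hierarchyIdentifier)
| where cloud == "AWS"
| project name, resourceGroup, cloud, accountId, subscriptionId
```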
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | To view all the active recommendations for your resources by resource type, use :::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Screenshot of GCP options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png"::: +## Integrate with Microsoft 365 Defender ++When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed. ++The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface. ++Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md). + ## Next steps Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud: |
defender-for-cloud | Quickstart Onboard Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md | Title: Connect on-premises machines description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud. Previously updated : 06/29/2023 Last updated : 11/02/2023 To verify that your machines are connected: ![Defender for Cloud icon for an Azure Arc-enabled server.](./media/quickstart-onboard-machines/arc-enabled-machine-icon.png) Azure Arc-enabled server +## Integrate with Microsoft 365 Defender ++When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed. ++The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface. ++Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md). + ## Clean up resources There's no need to clean up any resources for this article. |
defender-for-cloud | Regulatory Compliance Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md | Title: Regulatory compliance checks -description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.' + Title: Improve regulatory compliance in Microsoft Defender for Cloud +description: Learn how to improve regulatory compliance in Microsoft Defender for Cloud. Last updated 06/18/2023 -# Tutorial: Improve your regulatory compliance +# Improve regulatory compliance -Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards. +Microsoft Defender for Cloud helps you to meet regulatory compliance requirements by continuously assessing resources against compliance controls, and identifying issues that are blocking you from achieving a particular compliance certification. -When you enable Defender for Cloud on an Azure subscription, the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/), [PCI-DSS](https://www.pcisecuritystandards.org/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. +In the **Regulatory compliance** dashboard, you manage and interact with compliance standards. You can see which compliance standards are assigned, turn standards on and off for Azure, AWS, and GCP, review the status of assessments against standards, and more. -The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves. +## Integration with Purview -> [!TIP] -> Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [Multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud). 
--In this tutorial you'll learn how to: --> [!div class="checklist"] -> -> - Evaluate your regulatory compliance using the regulatory compliance dashboard -> - Check MicrosoftΓÇÖs compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products -> - Improve your compliance posture by taking action on recommendations -> - Download PDF/CSV reports as well as certification reports of your compliance status -> - Setup alerts on changes to your compliance status -> - Export your compliance data as a continuous stream and as weekly snapshots +Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. -If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. +When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. -## Prerequisites +Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud). -To step through the features covered in this tutorial: -- [Enable enhanced security features](enable-enhanced-security.md). You can enable these for free for 30 days.-- You must be signed in with an account that has reader access to the policy compliance data. The **Reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned. -## Assess your regulatory compliance -The regulatory compliance dashboard shows your selected compliance standards with all their requirements, where supported requirements are mapped to applicable security assessments. The status of these assessments reflects your compliance with the standard. +## Before you start -Use the regulatory compliance dashboard to help focus your attention on the gaps in compliance with your chosen standards and regulations. This focused view also enables you to continuously monitor your compliance over time within dynamic cloud and hybrid environments. +- By default, when you enable Defender for Cloud on an Azure subscription, AWS account, or GCP plan, the MCSB plan is enabled +- You can add additional non-default compliance standards when at least one paid plan is enabled in Defender for Cloud. +- You must be signed in with an account that has reader access to the policy compliance data. The **Reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you need to have **Resource Policy Contributor** and **Security Admin** roles assigned. -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Regulatory compliance**. +## Assess regulatory compliance - The dashboard provides you with an overview of your compliance status and the set of supported compliance regulations. You'll see your overall compliance score, and the number of passing vs. 
failing assessments associated with each standard. +The **Regulatory compliance** dashboard shows which compliance standards are enabled. It shows the controls within each standard, and security assessments for those controls. The status of these assessments reflects your compliance with the standard. +The dashboard helps you to focus on gaps in standards, and to monitor compliance over time. - The following list has a numbered item that matches each location in the image above, and describes what is in the image: -- Select a compliance standard to see a list of all controls for that standard. (1)-- View the subscription(s) that the compliance standard is applied on. (2)-- Select a Control to see more details. Expand the control to view the assessments associated with the selected control. Select an assessment to view the list of resources associated and the actions to remediate compliance concerns. (3)-- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)-- In the Your Actions tab, you can see the automated and manual assessments associated to the control. (5)-- Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6)-- The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7)+1. In the Defender for Cloud portal open the **Regulatory compliance** page. -## Investigate regulatory compliance issues + :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Screenshot that shows the exploration of the details of compliance with a specific standard." lightbox="media/regulatory-compliance-dashboard/compliance-drilldown.png"::: -You can use the information in the regulatory compliance dashboard to investigate any issues that might be affecting your compliance posture. +1. Use the dashboard in accordance with the numbered items in the image. -**To investigate your compliance issues**: + - (1). Select a compliance standard to see a list of all controls for that standard. + - (2). View the subscriptions on which the compliance standard is applied. + - (3). Select and expand a control to view the assessments associated with it. Select an assessment to view the associated resources, and possible remediation actions. + - (4). Select **Control details** to view the **Overview**, **Your Actions**, and **Microsoft Actions** tabs. + - (5). In **Your Actions**, you can see the automated and manual assessments associated with the control. + - (6). Automated assessments show the number of failed resources and resource types, and link you directly to the remediation information. + - (7). Manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. -1. Sign in to the [Azure portal](https://portal.azure.com). +## Investigate issues -1. Navigate to **Defender for Cloud** > **Regulatory compliance**. +You can use information in the dashboard to investigate issues that might affect compliance with the standard. -1. Select a regulatory compliance standard. +1. In the Defender for Cloud portal, open **Regulatory compliance**. -1. Select a compliance control to expand it. +1. Select a regulatory compliance standard, and select a compliance control to expand it. 1. Select **Control details**. 
You can use the information in the regulatory compliance dashboard to investigat The regulatory compliance has both automated and manual assessments that might need to be remediated. Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard. -**To remediate an automated assessment**: --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Navigate to **Defender for Cloud** > **Regulatory compliance**. -1. Select a regulatory compliance standard. +1. In the Defender for Cloud portal, open **Regulatory compliance**. -1. Select a compliance control to expand it. +1. Select a regulatory compliance standard, and select a compliance control to expand it. 1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue. The regulatory compliance has both automated and manual assessments that might n 1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves. - > [!NOTE] - > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment. ++Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment. ## Remediate a manual assessment The regulatory compliance has automated and manual assessments that might need to be remediated. Manual assessments are assessments that require input from the customer to remediate them. -**To remediate a manual assessment**: --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Regulatory compliance**. +1. In the Defender for Cloud portal, open **Regulatory compliance**. -1. Select a regulatory compliance standard. +1. Select a regulatory compliance standard, and select a compliance control to expand it. -1. Select a compliance control to expand it. --1. Under the Manual attestation and evidence section, select an assessment. +1. Under the **Manual attestation and evidence** section, select an assessment. 1. Select the relevant subscriptions. The regulatory compliance has automated and manual assessments that might need t ## Generate compliance status reports and certificates -- To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**.+1. To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**. The report provides a high-level summary of your compliance status for the selected standard based on Defender for Cloud assessments data. The report's organized according to the controls of that particular standard. The report can be shared with relevant stakeholders, and might provide evidence to internal and external auditors. :::image type="content" source="./media/regulatory-compliance-dashboard/download-report.png" alt-text="Screenshot that shows using the toolbar in Defender for Cloud's regulatory compliance dashboard to download compliance reports."::: -- To download Azure and Dynamics **certification reports** for the standards applied to your subscriptions, use the **Audit reports** option.+1. 
To download Azure and Dynamics **certification reports** for the standards applied to your subscriptions, use the **Audit reports** option. :::image type="content" source="media/release-notes/audit-reports-regulatory-compliance-dashboard.png" alt-text="Screenshot that shows using the toolbar in Defender for Cloud's regulatory compliance dashboard to download Azure and Dynamics certification reports."::: - Select the tab for the relevant reports types (PCI, SOC, ISO, and others) and use filters to find the specific reports you need: +1. Select the tab for the relevant reports types (PCI, SOC, ISO, and others) and use filters to find the specific reports you need: :::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Screenshot that shows filtering the list of available Azure Audit reports using tabs and filters."::: For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate. - > [!NOTE] - > When you download one of these certification reports, you'll be shown the following privacy notice: - > - > _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._ ++ When you download one of these certification reports, you'll be shown the following privacy notice: + + _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._ ### Check compliance offerings status Transparency provided by the compliance offerings (currently in preview), allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform. -**To check the compliance offerings status**: --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Navigate to **Defender for Cloud** > **Regulatory compliance**. +1. In the Defender for Cloud portal, open **Regulatory compliance**. 1. Select **Compliance offerings**. Transparency provided by the compliance offerings (currently in preview), allows :::image type="content" source="media/regulatory-compliance-dashboard/search-service.png" alt-text="Screenshot of the compliance offering screen with the search bar highlighted." lightbox="media/regulatory-compliance-dashboard/search-service.png"::: -## Configure frequent exports of your compliance status data +## Continuously export compliance status If you want to track your compliance status with other monitoring tools in your environment, Defender for Cloud includes an export mechanism to make this straightforward. Configure **continuous export** to send select data to an Azure Event Hubs or a Log Analytics workspace. Learn more in [continuously export Defender for Cloud data](continuous-export.md). 
Use continuous export data to an Azure Event Hubs or a Log Analytics workspace: -- Export all regulatory compliance data in a **continuous stream**:+1. Export all regulatory compliance data in a **continuous stream**: :::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-stream.png" alt-text="Screenshot that shows how to continuously export a stream of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-stream.png"::: -- Export **weekly snapshots** of your regulatory compliance data:+1. Export **weekly snapshots** of your regulatory compliance data: :::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png" alt-text="Screenshot that shows how to continuously export a weekly snapshot of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png"::: > [!TIP]-> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance) +> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. -## Run workflow automations when there are changes to your compliance +## Trigger a workflow when assessments change Defender for Cloud's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments changes state. For example, you might want Defender for Cloud to email a specific user when a c ## Next steps -In this tutorial, you learned about using Defender for CloudΓÇÖs regulatory compliance dashboard to: --> [!div class="checklist"] -> -> - View and monitor your compliance posture regarding the standards and regulations that are important to you. -> - Improve your compliance status by resolving relevant recommendations and watching the compliance score improve. --The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment. - To learn more, see these related pages: - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard. |
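Alongside the dashboard, downloaded reports, and continuous export, compliance state can also be queried directly from Azure Resource Graph. The nested resource type in the sketch below is the commonly documented shape for compliance assessment data, but verify it, and the values of `properties.state`, in your tenant before building on it.

```kusto
// Sketch: summarize regulatory compliance assessment state per standard.
securityresources
| where type == "microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols/regulatorycomplianceassessments"
| extend standard = extract(@"/regulatoryComplianceStandards/([^/]+)/", 1, id),
         state = tostring(properties.state)
| summarize assessments = count() by standard, state
| order by standard asc
```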
defender-for-cloud | Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md | description: A description of what's new and changed in Microsoft Defender for C Previously updated : 09/06/2023 Last updated : 11/08/2023 # Archive for what's new in Defender for Cloud? Microsoft Defender for Cloud helps security teams to be more productive at reduc - Automatically discover data resources across cloud estate and evaluate their accessibility, data sensitivity and configured data flows. -Continuously uncover risks to data breaches of sensitive data resources, exposure or attack paths that could lead to a data resource using a lateral movement technique.-- Detect suspicious activities that may indicate an ongoing threat to sensitive data resources.+- Detect suspicious activities that might indicate an ongoing threat to sensitive data resources. [Learn more](concept-data-security-posture.md) about data-aware security posture. According to the [2021 State of the Cloud report](https://info.flexera.com/CM-RE **Microsoft Defender for Cloud** is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multicloud and hybrid environments. +At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud** reflects the integrated capabilities of our security offering and our ability to support any cloud platform. ### Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2 You'll find these tactics wherever you access recommendation information: - **Recommendation details pages** show the mapping for all relevant recommendations: - :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation."::: - - **The recommendations page in Defender for Cloud** has a new :::image type="icon" source="media/review-security-recommendations/tactics-filter-recommendations-page.png" border="false"::: filter to select recommendations according to their associated tactic: Learn more in [Review your security recommendations](review-security-recommendations.md). Learn more in [Identify vulnerable container images in your CI/CD workflows](def ### More Resource Graph queries available for some recommendations -All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg). +All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md). 
Security Center includes built-in vulnerability scanners to scan your VMs, SQL servers and their hosts, and container registries for security vulnerabilities. The findings are returned as recommendations with all the individual findings for each resource type gathered into a single view. The recommendations are: The filters added this month provide options to refine the recommendations list > > Learn more about each of these response options: >- > - [Fix button](implement-security-recommendations.md#fix-button) + > - [Fix button](implement-security-recommendations.md) > - [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md) :::image type="content" source="./media/release-notes/added-recommendations-filters.png" alt-text="Recommendations grouped by security control." lightbox="./media/release-notes/added-recommendations-filters.png"::: The policy definitions can be found in Azure Policy: Get started with [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation). -Learn more about using the two export policies in [Configure workflow automation at scale using the supplied policies](workflow-automation.md#configure-workflow-automation-at-scale-using-the-supplied-policies) and [Set up a continuous export](continuous-export.md#set-up-a-continuous-export). +Learn more about using the two export policies in [Configure workflow automation at scale using the supplied policies](workflow-automation.md) and [Set up a continuous export](continuous-export.md#set-up-a-continuous-export). ### New recommendation for using NSGs to protect non-internet-facing virtual machines In order to enable enterprise level scenarios on top of Security Center, it's no Windows Admin Center is a management portal for Windows Servers who are not deployed in Azure offering them several Azure management capabilities such as backup and system updates. We have recently added an ability to onboard these non-Azure servers to be protected by ASC directly from the Windows Admin Center experience. -Users can onboard a WAC server to Azure Security Center and enable viewing its security alerts and recommendations directly in the Windows Admin Center experience. +With this new experience users can onboard a WAC server to Azure Security Center and enable viewing its security alerts and recommendations directly in the Windows Admin Center experience. ## September 2019 |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | You can now prioritize your security recommendations according to the risk level By organizing your recommendations based on their risk level (Critical, high, medium, low), you're able to address the most critical risks within your environment and efficiently prioritize the remediation of security issues based on the actual risk such as internet exposure, data sensitivity, lateral movement possibilities, and potential attack paths that could be mitigated by resolving the recommendations. -Learn more about [risk prioritization](security-policy-concept.md). +Learn more about [risk prioritization](implement-security-recommendations.md#group-recommendations-by-risk-level). ### Attack path analysis new engine and extensive enhancements As part of security alert quality improvement process of Defender for Servers, a |--|--| | Adaptive application control policy violation was audited.[VM_AdaptiveApplicationControlWindowsViolationAudited, VM_AdaptiveApplicationControlWindowsViolationAudited] | The below users ran applications that are violating the application control policy of your organization on this machine. It can possibly expose the machine to malware or application vulnerabilities.| -To keep viewing this alert in the "Security alerts" blade in the Microsoft Defender for Cloud portal, change the default view filter **Severity** to include **informational** alerts in the grid. +To keep viewing this alert in the "Security alerts" page in the Microsoft Defender for Cloud portal, change the default view filter **Severity** to include **informational** alerts in the grid. :::image type="content" source="media/release-notes/add-informational-severity.png" alt-text="Screenshot that shows you where to add the informational severity for alerts." lightbox="media/release-notes/add-informational-severity.png"::: For more information, see [Migrate to SQL server-targeted Azure Monitoring Agent September 20, 2023 -You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud. +You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps page and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud. Learn more about [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). You can learn more about data aware security posture in the following articles: - [Support and prerequisites for data-aware security posture](concept-data-security-posture-prepare.md) - [Enable data-aware security posture](data-security-posture-enable.md) - [Explore risks to sensitive data](data-security-review-risks.md)-- [Azure data attack paths](attack-path-reference.md#azure-data)-- [AWS data attack paths](attack-path-reference.md#aws-data) ### General Availability (GA): malware scanning in Defender for Storage Here's a table of the new alerts. 
|Alert (alert type)|Description|MITRE tactics|Severity| |-|-|-|-| | **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |-| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | -| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | -| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | -| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | -| **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. 
| Reconnaissance | Medium | -| **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium | -| **Suspicious usage of VM Access extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VM Access extension was detected on your virtual machines. Attackers may abuse the VM Access extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium | -| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | -| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | -| **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. 
| Execution | High | +| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | +| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | +| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | +| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | +| **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium | +| **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. 
| Impact | Medium | +| **Suspicious usage of VM Access extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VM Access extension was detected on your virtual machines. Attackers might abuse the VM Access extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium | +| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | +| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | +| **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions). This alert focuses on identifying suspicious activities leveraging Azure virtual | Alert Display Name <br> (Alert Type) | Description | Severity | MITRE Tactic | |||||-| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. 
| Low | Impact | +| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact | For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md). The NIST 800-53 standards (both R4 and R5) have recently been updated with contr These controls were previously calculated as passed controls, so you might see a significant dip in your compliance score for NIST standards between April 2023 and May 2023. -For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md#investigate-regulatory-compliance-issues). +For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md). ### Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud Learn more about [agentless container posture](concept-agentless-containers.md). ## May 2023 -Updates in May include: +Updates in May include: - [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault) - [Agentless scanning now supports encrypted disks in AWS](#agentless-scanning-now-supports-encrypted-disks-in-aws) Updates in May include: | Alert (alert type) | Description | MITRE tactics | Severity | |||:-:||-| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern may be legitimate activity. It could be an indication of a possible attempt to gain access of the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium | +| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern might be legitimate activity. It could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium | For all of the available alerts, see [Alerts for Azure Key Vault](alerts-reference.md#alerts-azurekv). Defender for Resource Manager has the following new alert: | Alert (alert type) | Description | MITRE tactics | Severity | |||:-:||-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. 
While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium | +| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity might be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium | You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager). |
defender-for-cloud | Review Exemptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-exemptions.md | + + Title: Exempt a recommendation in Microsoft Defender for Cloud. +description: Learn how to exempt recommendations so they're not taken into account in Microsoft Defender for Cloud. +++ Last updated : 01/02/2022+++# Review resources exempted from recommendations ++In Microsoft Defender for Cloud, you can exempt protected resources from Defender for Cloud security recommendations. [Learn more](exempt-resource.md). This article describes how to review and work with exempted resources. +++## Review exempted resources in the portal ++1. In Defender for Cloud, open the **Recommendations** page. +1. Select **Add filter** > **Is exempt**. +1. Select whether you want to see recommendations that have exempted resources, or those without exemptions. ++ :::image type="content" source="media/review-exemptions/filter-exemptions.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group." lightbox="media/review-exemptions/filter-exemptions.png"::: ++1. In the details page for the relevant recommendation, review the exemption rules. ++1. For each resource, the **Reason** column shows why the resource is exempted. To modify the exemption settings for a resource, select the ellipsis in the resource > **Manage exemption**. ++You can also review exempted resources on the Defender for Cloud > **Inventory** page. In the page, select **Add filter**. In the **Filter** dropdown list, select **Contains Exemptions** to find all resources that have been exempted from one or more recommendations. ++++## Review exempted resources with Azure Resource Graph ++[Azure Resource Graph (ARG)](../governance/resource-graph/index.yml) provides instant access to resource information across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to [query information](../governance/resource-graph/first-query-portal.md) using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). ++To view all recommendations that have exemption rules: ++1. In the **Recommendations** page, select **Open query**. +1. Enter the following query and select **Run query**. 
++ ```kusto + securityresources + | where type == "microsoft.security/assessments" + // Get recommendations in useful format + | project + ['TenantID'] = tenantId, + ['SubscriptionID'] = subscriptionId, + ['AssessmentID'] = name, + ['DisplayName'] = properties.displayName, + ['ResourceType'] = tolower(split(properties.resourceDetails.Id,"/").[7]), + ['ResourceName'] = tolower(split(properties.resourceDetails.Id,"/").[8]), + ['ResourceGroup'] = resourceGroup, + ['ContainsNestedRecom'] = tostring(properties.additionalData.subAssessmentsLink), + ['StatusCode'] = properties.status.code, + ['StatusDescription'] = properties.status.description, + ['PolicyDefID'] = properties.metadata.policyDefinitionId, + ['Description'] = properties.metadata.description, + ['RecomType'] = properties.metadata.assessmentType, + ['Remediation'] = properties.metadata.remediationDescription, + ['Severity'] = properties.metadata.severity, + ['Link'] = properties.links.azurePortal + | where StatusDescription contains "Exempt" + ``` +++## Get notified when exemptions are created ++To keep track of how users are exempting resources from recommendations, we've created an Azure Resource Manager (ARM) template that deploys a Logic App Playbook, and all necessary API connections to notify you when an exemption has been created. ++- Learn more about the playbook in TechCommunity blog [How to keep track of Resource Exemptions in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-keep-track-of-resource-exemptions-in-azure-security/ba-p/1770580). +- Locate the ARM template in [Microsoft Defender for Cloud GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption) +- [Use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json) to deploy all components. +++## Next steps ++[Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Review Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md | Title: Improving your security posture with recommendations -description: This document walks you through how to identify security recommendations that will help you improve your security posture. + Title: Review security recommendations in Microsoft Defender for Cloud +description: Learn how to review security recommendations in Microsoft Defender for Cloud Previously updated : 01/10/2023 Last updated : 11/08/2023 -# Find recommendations that can improve your security posture +# Review security recommendations -To improve your [secure score](secure-score-security-controls.md), you have to implement the security recommendations for your environment. From the list of recommendations, you can use filters to find the recommendations that have the most impact on your score, or the ones that you were assigned to implement. +In Microsoft Defender for Cloud, resources and workloads are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture. -To get to the list of recommendations: +This article describes how to review security recommendations in your Defender for Cloud deployment using the latest version of the portal experience. -1. Sign in to the [Azure portal](https://portal.azure.com). +## Get an overview -1. Either: - - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment you want to improve. - - Go to **Recommendations** in the Defender for Cloud menu. +In the Defender for Cloud portal > **Overview** dashboard, get a holistic look at your environment, including security recommendations. -You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations. Look at the [details of the recommendation](security-policy-concept.md) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-a-security-recommendation). +- **Active recommendations**: Recommendations that are active in your environment. +- **Unassigned recommendations**: See which recommendations don't have owners assigned to them. +- **Overdue recommendations**: Recommendations that have an expired due date. +- **Attack paths**: See the number of attack paths. -You can learn more by watching this video from the Defender for Cloud in the Field video series: -- [Security posture management improvements](episode-four.md) -## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a> +## Review recommendations -Your [secure score is calculated](secure-score-security-controls.md) based on the security recommendations that you've implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md). +1. In Defender for Cloud, open the **Recommendations** page. +1. 
For each recommendation, review: -The list of recommendations shows the **Potential score increase** that you can achieve when you remediate all of the recommendations in the security control. + - **Risk level** - Specifies whether the recommendation risk is Critical, High, Medium, or Low. + - **Affected resource** - Indicates the affected resources. + - **Risk factors** - Environmental factors of the resource affected by the recommendation, which influence the exploitability and the business effect of the underlying security issue. For example, internet exposure, sensitive data, lateral movement potential, and more. + - **Attack Paths** - The number of attack paths. + - **Owner** - The person assigned to this recommendation. + - **Due date** - Indicates the due date for fixing the recommendation. + - **Recommendation status** - Indicates whether the recommendation has been assigned, and whether the due date for fixing the recommendation has passed. + -To find recommendations that can improve your secure score: +## Review recommendation details -1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score. - - You can also use the search box and filters above the list of recommendations to find specific recommendations. -1. Open a security control to see the recommendations that have unhealthy resources. +1. In the **Recommendations** page, select the recommendation. +1. In the recommendation page, review the details: + - **Description** - A short description of the security issue. + - **Attack Paths** - The number of attack paths. + - **Scope** - The affected subscription or resource. + - **Freshness** - The freshness interval for the recommendation. + - **Last change date** - The date this recommendation last changed. + - **Owner** - The person assigned to this recommendation. + - **Due date** - The assigned date the recommendation must be resolved by. + - **Findings by severity** - The total findings by severity. + - **Tactics & techniques** - The tactics and techniques mapped to MITRE ATT&CK. -When you [remediate](implement-security-recommendations.md) all of the recommendations in the security control, your secure score increases by the percentage point listed for the control. + :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Screenshot of the recommendation details page with labels for each element." lightbox="./media/security-policy-concept/recommendation-details-page.png"::: -## Manage the owner and ETA of recommendations that are assigned to you ## Explore a recommendation -[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy. +You can perform a number of actions to interact with recommendations. If an option isn't available, it isn't relevant for the recommendation. -Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't affect the secure score. The security team can also apply a grace period during which overdue recommendations continue to not affect the secure score. +1. 
In the **Recommendations** page, select a recommendation. +1. Select **Open query** to view detailed information about the affected resources using an Azure Resource Graph Explorer query +1. Select **View policy definition** to view the Azure Policy entry for the underlying recommendation (if relevant). +1. In **Review findings**, you can review affiliated findings by severity. + + :::image type="content" source="media/review-security-recommendations/recommendation-findings.png" alt-text="Screenshot of the findings tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-findings.png"::: -To help you plan your work and report on progress, you can set an ETA for the specific resources to show when you plan to have the recommendation resolved by for those resources. You can also change the owner of the recommendation for specific resources so that the person responsible for remediation is assigned to the resource. +1. In **Take action**: + - **Remediate**: A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources. + - **Assign owner and due date**: If you have a [governance rule](governance-rules.md) turned on for the recommendation, you can assign an owner and due date. + - **Exempt**: You can exempt resources from the recommendation, or disable specific findings using disable rules. + - **Workflow automation**: Set a logic app to trigger with this recommendation. +1. In **Graph**, you can view and investigate all context that is used for risk prioritization, including [attack paths](how-to-manage-attack-path.md). + :::image type="content" source="media/review-security-recommendations/recommendation-graph.png" alt-text="Screenshot of the graph tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-graph.png"::: -To change the owner of resources and set the ETA for remediation of recommendations that are assigned to you: -1. In the filters for list of recommendations, select **Show my items only**. - - The status column indicates the recommendations that are on time, overdue, or completed. - - The insights column indicates the recommendations that are in a grace period, so they currently don't affect your secure score until they become overdue. +## Manage recommendations assigned to you -1. Select an on time or overdue recommendation. -1. For the resources that are assigned to you, set the owner of the resource: - 1. Select the resources that are owned by another person, and select **Change owner and set ETA**. - 1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**. +Defender for Cloud supports governance rules for recommendations, to specify a recommendation owner or due date for action. Governance rules help ensure accountability and an SLA for recommendations. - The owner of the resource gets a weekly email listing the recommendations that they're assigned. +- Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. +- Before the recommendation is overdue, the recommendation doesn't affect the secure score. +- You can also apply a grace period during which overdue recommendations continue to not affect the secure score. -1. 
For resources that you own, set an ETA for remediation: - 1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**. - 1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources. - 1. Enter a justification for the remediation by that date, and select **Save**. +[Learn more](governance-rules.md) about configuring governance rules. -The due date for the recommendation doesn't change, but the security team can see that you plan to update the resources by the specified ETA date. +Manage recommendations assigned to you as follows: -## Review recommendation data in Azure Resource Graph (ARG) +1. In the Defender for Cloud portal > **Recommendations** page, select **Add filter** > **Owner**. -You can review recommendations in ARG both on the Recommendations page or on an individual recommendation. +1. Select your user entry. +1. In the recommendation results, review the recommendations, including affected resources, risk factors, attack paths, due dates, and status. +1. Select a recommendation to review it further. +1. In **Take action** > **Change owner & due date**, you change the recommendation owner and due date if needed. + - By default the owner of the resource gets a weekly email listing the recommendations assigned to them. + - If you select a new remediation date, in **Justification** specify reasons for remediation by that date. + - In **Set email notifications** you can: + - Override the default weekly email to the owner. + - Notify owners weekly with a list of open/overdue tasks. + - Notify the owner's direct manager with an open task list. +1. Select **Save**. -The toolbar on the Recommendations page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data. +> [!NOTE] +> Changing the expected completion date doesn't change the due date for the recommendation, but security partners can see that you plan to update the resources by the specified date. -ARG is designed to provide efficient resource exploration with the ability to query at scale across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal. +## Review recommendations in Azure Resource Graph -Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), you can cross-reference Defender for Cloud data with other resource properties. +You can use [Azure Resource Graph](../governance/resource-graph/index.yml) to query Defender for Cloud security posture data across multiple subscriptions. Azure Resource Graph provides an efficient way to query at scale across cloud environments by viewing, filtering, grouping, and sorting data. -For example, this recommendation details page shows 15 affected resources: +1. In the Defender for Cloud portal > **Recommendations** page > select **Open query**. +1. In [Azure Resource Graph](../governance/resource-graph/index.yml), write a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). +1. 
You can open the query in one of two ways: -When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same 15 resources and their health status for this recommendation: + - **Query returning affected resource** - Returns a list of all of the resources affected by this recommendation. + - **Query returning security findings** - Returns a list of all security issues found by the recommendation. -## Recommendation insights +### Example -The Insights column of the page gives you more details for each recommendation. The options available in this section include: +In this example, this recommendation details page shows 15 affected resources: -| Icon | Name | Description | -|--|--|--| -| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | **Preview recommendation** | This recommendation won't affect your secure score until it's GA. | -| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. | -| :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. | -| :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. | -Recommendations that aren't included in the calculations of your secure score, should still be remediated wherever possible, so that when the period ends they'll contribute towards your score instead of against it. +When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same affected resources for this recommendation: -## Download recommendations to a CSV report -Recommendations can be downloaded to a CSV report from the Recommendations page. -To download a CSV report of your recommendations: --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**. -1. Select **Download CSV report**. -- :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from."::: --You'll know the report is being prepared when the pop-up appears. ---When the report is ready, you'll be notified by a second pop-up. ---## Learn more --You can check out the following blogs: --- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)-- [New enhancements added to network security dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-enhancements-added-to-network-security-dashboard/ba-p/2896021) ## Next steps -In this document, you were introduced to security recommendations in Defender for Cloud. 
For related information: +[Remediate security recommendations](implement-security-recommendations.md) -- [Remediate recommendations](implement-security-recommendations.md)-Learn how to configure security policies for your Azure subscriptions and resource groups.-- [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).-- [Automate responses to Defender for Cloud triggers](workflow-automation.md)-Automate responses to recommendations-- [Exempt a resource from a recommendation](exempt-resource.md)-- [Security recommendations - a reference guide](recommendations-reference.md) |
defender-for-cloud | Secure Score Security Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md | Title: Secure score in Microsoft Defender for Cloud description: Learn about the Microsoft Cloud Security Benchmark secure score in Microsoft Defender for Cloud Previously updated : 06/19/2023 Last updated : 11/16/2023 # Secure score When you turn on Defender for Cloud in a subscription, the [Microsoft cloud secu Recommendations are issued based on assessment findings. Only built-in recommendations from the MSCB impact the secure score. -> [!Note] +> [!NOTE] > Recommendations flagged as **Preview** aren't included in secure score calculations. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. Preview recommendations are marked with: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: > [!NOTE] On the **Recommendations** page > **Secure score recommendations** tab in Defend Each control is calculated every eight hours for each Azure subscription, or AWS/GCP cloud connector. -> [!Important] +> [!IMPORTANT] > Recommendations within a control are updated more frequently than the control, and so there might be discrepancies between the resources count on the recommendations versus the one found on the control. ### Example scores for a control |
defender-for-cloud | Tutorial Enable Container Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md | You can learn more about Defender for Container's pricing on the [pricing page]( - [Connect your AWS account to Microsoft Defender for Cloud](quickstart-onboard-aws.md#connect-your-aws-account) -- Validate the following domains only if you're using a relevant OS.-- | Domain | Port | Host operating systems | - | -- | - |--| - | amazonlinux.*.amazonaws.com/2/extras/\* | 443 | Amazon Linux 2 | - | yum default repositories | - | RHEL / Centos | - | apt default repositories | - | Debian | +- Verify your Kubernetes nodes can access source repositories of your package manager. - Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md) are validated. |
defender-for-cloud | Tutorial Enable Container Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md | You can learn more about Defender for Container's pricing on the [pricing page]( - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md#connect-your-gcp-project). -- Validate the following domains only if you're using a relevant OS.-- | Domain | Port | Host operating systems | - | -- | - |--| - | amazonlinux.*.amazonaws.com/2/extras/\* | 443 | Amazon Linux 2 | - | yum default repositories | - | RHEL / Centos | - | apt default repositories | - | Debian | +- Verify your Kubernetes nodes can access source repositories of your package manager. - Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md) are validated. |
defender-for-cloud | Workflow Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md | -# Automate responses to Microsoft Defender for Cloud triggers +# Automate remediation responses Every security program includes multiple workflows for incident response. These processes might include notifying relevant stakeholders, launching a change management process, and applying specific remediation steps. Security experts recommend that you automate as many steps of those procedures as you can. Automation reduces overhead. It can also improve your security by ensuring the process steps are done quickly, consistently, and according to your predefined requirements. This article describes the workflow automation feature of Microsoft Defender for Cloud. This feature can trigger consumption logic apps on security alerts, recommendations, and changes to regulatory compliance. For example, you might want Defender for Cloud to email a specific user when an alert occurs. You'll also learn how to create logic apps using [Azure Logic Apps](../logic-apps/logic-apps-overview.md). -## Availability +## Before you start ++- You need **Security admin role** or **Owner** on the resource group. +- You must also have write permissions for the target resource. +- To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions: ++ - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones) + - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification. + +- If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances). -|Aspect|Details| -|-|:-| -|Release state:|General availability (GA)| -|Pricing:|Free| -|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification<br>If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)| -|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)| ## Create a logic app and define when it should automatically run This article describes the workflow automation feature of Microsoft Defender for :::image type="content" source="./media/workflow-automation/list-of-workflow-automations.png" alt-text="Screenshot of workflow automation page showing the list of defined automations." 
lightbox="./media/workflow-automation/list-of-workflow-automations.png"::: - From this page you can create new automation rules, enable, disable, or delete existing ones. -- > [!NOTE] - > A scope refers to the subscription where the workflow automation is deployed. +1. From this page, create new automation rules, enable, disable, or delete existing ones. A scope refers to the subscription where the workflow automation is deployed. 1. To define a new workflow, select **Add workflow automation**. The options pane for your new automation opens. :::image type="content" source="./media/workflow-automation/add-workflow.png" alt-text="Add workflow automations pane." lightbox="media/workflow-automation/add-workflow.png"::: - Here you can enter: - 1. A name and description for the automation. - 1. The triggers that will initiate this automatic workflow. For example, you might want your logic app to run when a security alert that contains "SQL" is generated. +1. Enter the following: - > [!NOTE] - > If your trigger is a recommendation that has "sub-recommendations", for example **Vulnerability assessment findings on your SQL databases should be remediated**, the logic app will not trigger for every new security finding; only when the status of the parent recommendation changes. + - A name and description for the automation. + - The triggers that will initiate this automatic workflow. For example, you might want your logic app to run when a security alert that contains "SQL" is generated. - 1. The consumption logic app that will run when your trigger conditions are met. + If your trigger is a recommendation that has "sub-recommendations", for example **Vulnerability assessment findings on your SQL databases should be remediated**, the logic app will not trigger for every new security finding; only when the status of the parent recommendation changes. ++1. Specify the consumption logic app that will run when your trigger conditions are met. 1. From the Actions section, select **visit the Logic Apps page** to begin the logic app creation process. This article describes the workflow automation feature of Microsoft Defender for > [!TIP] > Sometimes in a logic app, parameters are included in the connector as part of a string and not in their own field. For an example of how to extract parameters, see step #14 of [Working with logic app parameters while building Microsoft Defender for Cloud workflow automations](https://techcommunity.microsoft.com/t5/azure-security-center/working-with-logic-app-parameters-while-building-azure-security/ba-p/1342121). - The logic app designer supports the following Defender for Cloud triggers: -- - **When a Microsoft Defender for Cloud Recommendation is created or triggered** - If your logic app relies on a recommendation that gets deprecated or replaced, your automation will stop working and you'll need to update the trigger. To track changes to recommendations, use the [release notes](release-notes.md). +## Supported triggers - - **When a Defender for Cloud Alert is created or triggered** - You can customize the trigger so that it relates only to alerts with the severity levels that interest you. +The logic app designer supports the following Defender for Cloud triggers: - - **When a Defender for Cloud regulatory compliance assessment is created or triggered** - Trigger automations based on updates to regulatory compliance assessments. 
+- **When a Microsoft Defender for Cloud Recommendation is created or triggered** - If your logic app relies on a recommendation that gets deprecated or replaced, your automation will stop working and you'll need to update the trigger. To track changes to recommendations, use the [release notes](release-notes.md). - > [!NOTE] - > If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be launched by the Workflow Automation feature. Instead, use either of the triggers mentioned above. +- **When a Defender for Cloud Alert is created or triggered** - You can customize the trigger so that it relates only to alerts with the severity levels that interest you. - [![Sample logic app.](media/workflow-automation/sample-logic-app.png)](media/workflow-automation/sample-logic-app.png#lightbox) +- **When a Defender for Cloud regulatory compliance assessment is created or triggered** - Trigger automations based on updates to regulatory compliance assessments. -1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Select **Refresh** to ensure your new logic app is available for selection. +> [!NOTE] +> If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be launched by the Workflow Automation feature. Instead, use either of the triggers mentioned above. - ![Refresh.](media/workflow-automation/refresh-the-list-of-logic-apps.png) +1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). +1. Select **Refresh** to ensure your new logic app is available for selection. 1. Select your logic app and save the automation. The logic app dropdown only shows those with supporting Defender for Cloud connectors mentioned above. ## Manually trigger a logic app You can also run logic apps manually when viewing any security alert or recommendation. -To manually run a logic app, open an alert, or a recommendation and select **Trigger logic app**: +To manually run a logic app, open an alert, or a recommendation and select **Trigger logic app**. [![Manually trigger a logic app.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox) -## Configure workflow automation at scale using the supplied policies +## Configure workflow automation at scale Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents. To implement these policies: |Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef| |Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c| - > [!TIP] - > You can also find these by searching Azure Policy: - > - > 1. Open Azure Policy. 
- > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Accessing Azure Policy."::: - > 2. From the Azure Policy menu, select **Definitions** and search for them by name. + + You can also find these by searching Azure Policy. In Azure Policy, select **Definitions** and search for them by name. + 1. From the relevant Azure Policy page, select **Assign**. :::image type="content" source="./media/workflow-automation/export-policy-assign.png" alt-text="Assigning the Azure Policy."::: -1. Open each tab and set the parameters as desired: - 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use the workflow automation configuration. - 1. In the Parameters tab, enter the required information. +1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use the workflow automation configuration. +1. In the **Parameters** tab, enter the required information. :::image type="content" source="media/workflow-automation/parameters-tab.png" alt-text="Screenshot of the parameters tab."::: - 1. (Optional), Apply this assignment to an existing subscription in the **Remediation** tab and select the option to create a remediation task. +1. Optionally apply this assignment to an existing subscription in the **Remediation** tab and select the option to create a remediation task. 1. Review the summary page and select **Create**. |
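The at-scale configuration above works by assigning built-in Azure Policy definitions (for example, `73d6ab6c-2475-4850-afd6-43795f3492ef` for recommendation-driven workflow automation). As a rough, unofficial sketch of making the same assignment programmatically, the following Python snippet uses the `azure-identity` and `azure-mgmt-resource` packages. The assignment name, resource group, automation name, and the keys inside the `parameters` block are illustrative placeholders rather than the policy's actual parameter schema, and the managed identity and location that a DeployIfNotExists assignment also needs are omitted for brevity.

```python
# Illustrative sketch only (not an official sample). Assigns the built-in
# "Deploy Workflow Automation for Microsoft Defender for Cloud recommendations"
# policy definition at subscription scope using the Python management SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"           # placeholder target subscription
scope = f"/subscriptions/{subscription_id}"     # or a management group scope

client = PolicyClient(DefaultAzureCredential(), subscription_id)

assignment = client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="workflow-automation-recommendations",  # placeholder
    parameters=PolicyAssignment(
        display_name="Workflow automation for security recommendations",
        # Built-in definition ID taken from the table above.
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "73d6ab6c-2475-4850-afd6-43795f3492ef"
        ),
        # Placeholder parameter names -- use the exact names shown in the
        # policy definition's Parameters tab in the Azure portal.
        parameters={
            "automationName": {"value": "my-workflow-automation"},
            "resourceGroupName": {"value": "my-automation-rg"},
        },
    ),
)
print(f"Created policy assignment: {assignment.id}")
```

Passing a management group scope string (`/providers/Microsoft.Management/managementGroups/<name>`) instead of a subscription scope mirrors the centralized-management guidance in the **Basics** tab step above.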
devtest-labs | Lab Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/lab-services-overview.md | - Title: Azure Lab Services vs. Azure DevTest Labs -description: Compare features, scenarios, and use cases for Azure DevTest Labs and Azure Lab Services. --- Previously updated : 11/15/2021----# Compare Azure DevTest Labs and Azure Lab Services --You can use two different Azure services to set up lab environments in the cloud: --- [Azure DevTest Labs](devtest-lab-overview.md) provides development or test cloud environments for your team.-- In DevTest Labs, a lab owner [creates a lab](devtest-lab-create-lab.md) and makes it available to lab users. The owner provisions the lab with Windows or Linux virtual machines (VMs) that have all necessary software and tools. Lab users connect to lab VMs for daily work and short-term projects. Lab administrators can analyze resource usage and costs across multiple labs, and set overarching policies to optimize organization or team costs. --- [Azure Lab Services](../lab-services/lab-services-overview.md) provides managed classroom labs.-- Lab Services does all infrastructure management, from spinning up VMs and scaling infrastructure to handling errors. After an IT administrator creates a Lab Services lab account, instructors can [create labs](../lab-services/quick-create-lab-plan-portal.md) in the account. An instructor specifies the number and type of VMs they need for the class, and adds users to the class. Once users register in the class, they can access the VMs to do class exercises and homework. --## Key capabilities --DevTest Labs and Lab Services support the following key capabilities and features: --- **Fast and flexible lab setup**. Lab owners and instructors can quickly set up labs for their needs. Lab Services takes care of all Azure infrastructure work, and provides built-in infrastructure scaling and resiliency for managed labs. In DevTest Labs, lab owners can self-manage and customize infrastructure.--- **Simplified lab user experience**. In a Lab Services classroom lab, users can register with a code and access the lab to use resources. A DevTest Labs lab owner can give permissions for lab users to create and access VMs, manage and reuse data disks, and set up reusable secrets.--- **Cost optimization and analysis**. In Lab Services, you can give each student a limited number of hours for using the VMs. A DevTest Labs lab owner can set a lab schedule to specify when lab VMs are accessible to users. The schedule can automatically shut down and start up VMs at specified times. The lab owner can set usage policies per user or per lab to optimize costs. Lab owners can analyze lab usage and activity trends. Classroom labs offer a smaller subset of cost optimization and analysis options.--DevTest Labs also supports the following features: --- **Embedded security**. A lab owner can set up a private virtual network and subnets for a lab, and enable a shared public IP address. DevTest Labs lab users can securely access virtual network resources by using Azure ExpressRoute or a site-to-site virtual private network (VPN).--- **Workflow and tool integration**. In DevTest Labs, you can automatically provision environments from within your continuous integration/continuous deployment (CI/CD) tools. 
You can integrate DevTest Labs into your organization's website and management systems.--## Scenarios --Here are typical scenarios for Lab Services and DevTest Labs: --### Set up a resizable classroom computer lab in the cloud --- To create a managed classroom lab, you just tell Lab Services what you need. The service creates and manages lab infrastructure so you can focus on teaching your class, not technical details.-- Lab Services provides students with a lab of VMs that are configured with exactly what's needed. You can give each student a limited number of hours for using the VMs.-- You can move your school's physical computer lab into the cloud. Lab Services automatically scales the number of VMs to only the maximum usage and cost threshold you set.-- You can delete labs with a single click when you're done with them.--### Use DevTest Labs for development and test environments --You can use DevTest Labs for many key scenarios. One primary scenario is to host development and test machines. DevTest Labs provides these benefits for developers and testers: --- Lab owners and users can provision Windows and Linux environments by using reusable templates and artifacts.-- Developers can quickly provision development machines on demand, and easily customize their machines when necessary.-- Testers can test the latest application version, and scale up load testing by provisioning multiple test agents.-- Administrators can control costs by ensuring that developers and testers can't get more VMs than they need.-- Administrators can ensure that VMs are shut down when not in use.--For more information, see [Use DevTest Labs for development](devtest-lab-developer-lab.md) and [Use DevTest Labs for testing](devtest-lab-test-env.md). --## Types of labs --You can create two types of labs: **managed labs** with Lab Services, or **labs** with DevTest Labs. If you just want to input your needs and let the service set up and manage required lab infrastructure, select **classroom lab** from the **managed lab types** in Lab Services. If you want to manage your own infrastructure, create labs by using DevTest Labs. --The following sections provide more details about these lab types. --### Managed labs --Managed labs are Lab Services labs with infrastructure that Azure manages. Managed lab types can fit specific needs, like classroom labs. --With managed labs, you can get started right away, with minimal setup. To create a classroom lab, first you create a lab account for your organization. The lab account serves as the central account for managing all the labs in the organization. --For managed labs, Lab Services creates and manages Azure resources in internal Microsoft subscriptions, not in your own Azure subscription. The service keeps track of resource usage in the internal subscriptions, and bills usage back to the Azure subscription that contains the lab account. --Here are some use cases for managed lab types: --- Provide students with a lab of VMs that have exactly what's needed for a class.-- Limit the number of hours that students can use VMs.-- Set up a pool of high-performance VMs to do compute-intensive or graphics-intensive research.-- Move a school's physical computer lab into the cloud.-- Quickly provision a lab of VMs for hosting a hackathon.--### DevTest Labs --You might want to manage all lab infrastructure and configuration yourself, within your own Azure subscription. For this scenario, create a DevTest Labs lab in the Azure portal. You don't create or use a lab account for DevTest Labs. 
--Here are some use cases for DevTest Labs: --- Quickly provision a lab of VMs to host a hackathon or hands-on conference session.-- Create a pool of VMs configured with an application to use for bug bashes.-- Provide developers with VMs configured with all the tools they need.-- Repeatedly create labs of test machines to test the latest bits.-- Set up differently configured VMs and multiple test agents for scale and performance testing.-- Offer customer training sessions in a lab configured with a product's latest version.--## Lab Services vs. DevTest Labs --The following table compares the two types of Azure lab environments: --| Feature | Azure Lab Services | Azure DevTest Labs | -| -- | -- | -- | -| Management of Azure infrastructure | Automatic infrastructure management | You manage the infrastructure manually | -| Built-in resiliency | Automatic handling of resiliency | You handle resiliency manually | -| Subscription management | The service handles allocation of resources within Microsoft subscriptions that back the service. | You manage the subscription within your own Azure subscription. | -| Autoscaling | Service automatically scales | No subscription autoscaling | -| Azure Resource Manager deployment within the lab | Not available | Available | - |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | Title: Microsoft Azure Data Manager for Energy entitlement concepts -description: This article describes the various concepts regarding the entitlement services in |